00:00:00.000 Started by upstream project "autotest-per-patch" build number 130928 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:08.586 The recommended git tool is: git 00:00:08.586 using credential 00000000-0000-0000-0000-000000000002 00:00:08.589 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:08.603 Fetching changes from the remote Git repository 00:00:08.607 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:08.620 Using shallow fetch with depth 1 00:00:08.620 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:08.620 > git --version # timeout=10 00:00:08.636 > git --version # 'git version 2.39.2' 00:00:08.636 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:08.650 Setting http proxy: proxy-dmz.intel.com:911 00:00:08.650 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:14.195 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:14.209 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:14.221 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:14.221 > git config core.sparsecheckout # timeout=10 00:00:14.233 > git read-tree -mu HEAD # timeout=10 00:00:14.252 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:14.269 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:14.269 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:14.362 [Pipeline] Start of Pipeline 00:00:14.377 [Pipeline] library 00:00:14.379 Loading library shm_lib@master 00:00:14.379 Library shm_lib@master is cached. Copying from home. 00:00:14.394 [Pipeline] node 00:00:14.406 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:14.407 [Pipeline] { 00:00:14.416 [Pipeline] catchError 00:00:14.416 [Pipeline] { 00:00:14.426 [Pipeline] wrap 00:00:14.433 [Pipeline] { 00:00:14.440 [Pipeline] stage 00:00:14.441 [Pipeline] { (Prologue) 00:00:14.656 [Pipeline] sh 00:00:14.945 + logger -p user.info -t JENKINS-CI 00:00:14.960 [Pipeline] echo 00:00:14.962 Node: GP8 00:00:14.969 [Pipeline] sh 00:00:15.264 [Pipeline] setCustomBuildProperty 00:00:15.272 [Pipeline] echo 00:00:15.274 Cleanup processes 00:00:15.278 [Pipeline] sh 00:00:15.565 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.565 1497009 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.578 [Pipeline] sh 00:00:15.862 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:15.863 ++ grep -v 'sudo pgrep' 00:00:15.863 ++ awk '{print $1}' 00:00:15.863 + sudo kill -9 00:00:15.863 + true 00:00:15.879 [Pipeline] cleanWs 00:00:15.889 [WS-CLEANUP] Deleting project workspace... 00:00:15.889 [WS-CLEANUP] Deferred wipeout is used... 
00:00:15.900 [WS-CLEANUP] done 00:00:15.904 [Pipeline] setCustomBuildProperty 00:00:15.920 [Pipeline] sh 00:00:16.209 + sudo git config --global --replace-all safe.directory '*' 00:00:16.284 [Pipeline] httpRequest 00:00:16.934 [Pipeline] echo 00:00:16.936 Sorcerer 10.211.164.101 is alive 00:00:16.944 [Pipeline] retry 00:00:16.946 [Pipeline] { 00:00:16.961 [Pipeline] httpRequest 00:00:16.966 HttpMethod: GET 00:00:16.966 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:16.966 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:17.000 Response Code: HTTP/1.1 200 OK 00:00:17.000 Success: Status code 200 is in the accepted range: 200,404 00:00:17.001 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:44.385 [Pipeline] } 00:00:44.401 [Pipeline] // retry 00:00:44.409 [Pipeline] sh 00:00:44.699 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:44.714 [Pipeline] httpRequest 00:00:45.122 [Pipeline] echo 00:00:45.124 Sorcerer 10.211.164.101 is alive 00:00:45.133 [Pipeline] retry 00:00:45.135 [Pipeline] { 00:00:45.148 [Pipeline] httpRequest 00:00:45.153 HttpMethod: GET 00:00:45.153 URL: http://10.211.164.101/packages/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:00:45.154 Sending request to url: http://10.211.164.101/packages/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:00:45.160 Response Code: HTTP/1.1 200 OK 00:00:45.161 Success: Status code 200 is in the accepted range: 200,404 00:00:45.161 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:03:42.790 [Pipeline] } 00:03:42.809 [Pipeline] // retry 00:03:42.819 [Pipeline] sh 00:03:43.148 + tar --no-same-owner -xf spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:03:49.746 [Pipeline] sh 00:03:50.028 + git -C spdk log --oneline -n5 00:03:50.028 716daf683 bdev/nvme: interrupt mode for PCIe nvme ctrlr 00:03:50.028 33a99df94 nvme: create, manage fd_group for nvme poll group 00:03:50.028 d49b794e4 thread: Extended options for spdk_interrupt_register 00:03:50.028 e2e9091fb util: allow a fd_group to manage all its fds 00:03:50.028 89fbd3ce7 util: fix total fds to wait for 00:03:50.039 [Pipeline] } 00:03:50.053 [Pipeline] // stage 00:03:50.061 [Pipeline] stage 00:03:50.063 [Pipeline] { (Prepare) 00:03:50.079 [Pipeline] writeFile 00:03:50.094 [Pipeline] sh 00:03:50.379 + logger -p user.info -t JENKINS-CI 00:03:50.391 [Pipeline] sh 00:03:50.677 + logger -p user.info -t JENKINS-CI 00:03:50.688 [Pipeline] sh 00:03:50.973 + cat autorun-spdk.conf 00:03:50.973 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:50.973 SPDK_TEST_NVMF=1 00:03:50.973 SPDK_TEST_NVME_CLI=1 00:03:50.973 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:50.973 SPDK_TEST_NVMF_NICS=e810 00:03:50.973 SPDK_TEST_VFIOUSER=1 00:03:50.973 SPDK_RUN_UBSAN=1 00:03:50.973 NET_TYPE=phy 00:03:50.981 RUN_NIGHTLY=0 00:03:50.985 [Pipeline] readFile 00:03:51.009 [Pipeline] withEnv 00:03:51.011 [Pipeline] { 00:03:51.024 [Pipeline] sh 00:03:51.312 + set -ex 00:03:51.312 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:51.312 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:51.312 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:51.312 ++ SPDK_TEST_NVMF=1 00:03:51.312 ++ SPDK_TEST_NVME_CLI=1 00:03:51.312 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:51.312 ++ SPDK_TEST_NVMF_NICS=e810 
00:03:51.312 ++ SPDK_TEST_VFIOUSER=1 00:03:51.312 ++ SPDK_RUN_UBSAN=1 00:03:51.312 ++ NET_TYPE=phy 00:03:51.312 ++ RUN_NIGHTLY=0 00:03:51.312 + case $SPDK_TEST_NVMF_NICS in 00:03:51.312 + DRIVERS=ice 00:03:51.312 + [[ tcp == \r\d\m\a ]] 00:03:51.312 + [[ -n ice ]] 00:03:51.312 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:51.312 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:51.312 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:51.312 rmmod: ERROR: Module irdma is not currently loaded 00:03:51.312 rmmod: ERROR: Module i40iw is not currently loaded 00:03:51.312 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:51.312 + true 00:03:51.312 + for D in $DRIVERS 00:03:51.312 + sudo modprobe ice 00:03:51.312 + exit 0 00:03:51.321 [Pipeline] } 00:03:51.335 [Pipeline] // withEnv 00:03:51.340 [Pipeline] } 00:03:51.353 [Pipeline] // stage 00:03:51.362 [Pipeline] catchError 00:03:51.363 [Pipeline] { 00:03:51.376 [Pipeline] timeout 00:03:51.376 Timeout set to expire in 1 hr 0 min 00:03:51.377 [Pipeline] { 00:03:51.391 [Pipeline] stage 00:03:51.393 [Pipeline] { (Tests) 00:03:51.407 [Pipeline] sh 00:03:51.692 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:51.692 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:51.692 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:51.692 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:51.692 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.692 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:51.692 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:51.692 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:51.692 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:51.692 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:51.692 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:51.692 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:51.692 + source /etc/os-release 00:03:51.692 ++ NAME='Fedora Linux' 00:03:51.692 ++ VERSION='39 (Cloud Edition)' 00:03:51.692 ++ ID=fedora 00:03:51.692 ++ VERSION_ID=39 00:03:51.692 ++ VERSION_CODENAME= 00:03:51.692 ++ PLATFORM_ID=platform:f39 00:03:51.692 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:51.692 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:51.692 ++ LOGO=fedora-logo-icon 00:03:51.692 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:51.692 ++ HOME_URL=https://fedoraproject.org/ 00:03:51.692 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:51.692 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:51.692 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:51.692 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:51.692 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:51.692 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:51.692 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:51.692 ++ SUPPORT_END=2024-11-12 00:03:51.692 ++ VARIANT='Cloud Edition' 00:03:51.692 ++ VARIANT_ID=cloud 00:03:51.692 + uname -a 00:03:51.693 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:51.693 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.073 Hugepages 00:03:53.073 node hugesize free / total 00:03:53.073 node0 1048576kB 0 / 0 00:03:53.073 node0 2048kB 0 / 0 00:03:53.073 node1 1048576kB 0 / 0 00:03:53.073 node1 2048kB 0 / 0 00:03:53.073 00:03:53.073 Type BDF Vendor 
Device NUMA Driver Device Block devices 00:03:53.073 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:53.073 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:53.073 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:53.073 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:53.073 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:53.332 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:53.332 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:53.332 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:53.332 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:53.332 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:53.332 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.332 + rm -f /tmp/spdk-ld-path 00:03:53.332 + source autorun-spdk.conf 00:03:53.332 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:53.332 ++ SPDK_TEST_NVMF=1 00:03:53.332 ++ SPDK_TEST_NVME_CLI=1 00:03:53.332 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:53.332 ++ SPDK_TEST_NVMF_NICS=e810 00:03:53.332 ++ SPDK_TEST_VFIOUSER=1 00:03:53.332 ++ SPDK_RUN_UBSAN=1 00:03:53.332 ++ NET_TYPE=phy 00:03:53.332 ++ RUN_NIGHTLY=0 00:03:53.332 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:53.332 + [[ -n '' ]] 00:03:53.332 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.332 + for M in /var/spdk/build-*-manifest.txt 00:03:53.332 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:53.332 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:53.332 + for M in /var/spdk/build-*-manifest.txt 00:03:53.332 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:53.332 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:53.332 + for M in /var/spdk/build-*-manifest.txt 00:03:53.332 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:53.332 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:53.332 ++ uname 00:03:53.332 + [[ Linux == \L\i\n\u\x ]] 00:03:53.332 + sudo dmesg -T 00:03:53.332 + sudo dmesg --clear 00:03:53.332 + dmesg_pid=1498336 00:03:53.332 + [[ Fedora Linux == FreeBSD ]] 00:03:53.332 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:53.332 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:53.332 + sudo dmesg -Tw 00:03:53.332 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:53.332 + [[ -x /usr/src/fio-static/fio ]] 00:03:53.332 + export FIO_BIN=/usr/src/fio-static/fio 00:03:53.332 + FIO_BIN=/usr/src/fio-static/fio 00:03:53.332 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:53.332 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:53.332 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:53.332 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:53.332 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:53.332 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:53.332 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:53.332 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:53.332 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:53.332 Test configuration: 00:03:53.332 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:53.332 SPDK_TEST_NVMF=1 00:03:53.332 SPDK_TEST_NVME_CLI=1 00:03:53.332 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:53.332 SPDK_TEST_NVMF_NICS=e810 00:03:53.332 SPDK_TEST_VFIOUSER=1 00:03:53.332 SPDK_RUN_UBSAN=1 00:03:53.332 NET_TYPE=phy 00:03:53.591 RUN_NIGHTLY=0 20:31:22 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:53.591 20:31:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.591 20:31:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:53.591 20:31:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:53.591 20:31:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.591 20:31:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.591 20:31:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.591 20:31:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.591 20:31:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.591 20:31:22 -- paths/export.sh@5 -- $ export PATH 00:03:53.591 20:31:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.591 20:31:22 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:53.591 20:31:22 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:53.591 20:31:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728412282.XXXXXX 00:03:53.591 20:31:22 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728412282.mHDEP4 00:03:53.591 20:31:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:53.591 20:31:22 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:53.591 20:31:22 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:53.591 20:31:22 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:53.591 20:31:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:53.591 20:31:22 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:53.591 20:31:22 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:53.591 20:31:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.591 20:31:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:53.591 20:31:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:53.591 20:31:22 -- pm/common@17 -- $ local monitor 00:03:53.591 20:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.591 20:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.591 20:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.591 20:31:22 -- pm/common@21 -- $ date +%s 00:03:53.591 20:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.591 20:31:22 -- pm/common@21 -- $ date +%s 00:03:53.591 20:31:22 -- pm/common@25 -- $ sleep 1 00:03:53.591 20:31:22 -- pm/common@21 -- $ date +%s 00:03:53.591 20:31:22 -- pm/common@21 -- $ date +%s 00:03:53.591 20:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728412282 00:03:53.591 20:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728412282 00:03:53.592 20:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728412282 00:03:53.592 20:31:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728412282 00:03:53.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728412282_collect-cpu-load.pm.log 00:03:53.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728412282_collect-vmstat.pm.log 00:03:53.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728412282_collect-cpu-temp.pm.log 00:03:53.592 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728412282_collect-bmc-pm.bmc.pm.log 00:03:54.526 20:31:23 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:54.526 20:31:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:54.526 20:31:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:54.526 20:31:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:54.526 20:31:23 -- spdk/autobuild.sh@16 -- $ date -u 00:03:54.526 Tue Oct 8 06:31:23 PM UTC 2024 00:03:54.527 20:31:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:54.527 v25.01-pre-53-g716daf683 00:03:54.527 20:31:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:54.527 20:31:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:54.527 20:31:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:54.527 20:31:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:54.527 20:31:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:54.527 20:31:23 -- common/autotest_common.sh@10 -- $ set +x 00:03:54.527 ************************************ 00:03:54.527 START TEST ubsan 00:03:54.527 ************************************ 00:03:54.527 20:31:23 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:54.527 using ubsan 00:03:54.527 00:03:54.527 real 0m0.000s 00:03:54.527 user 0m0.000s 00:03:54.527 sys 0m0.000s 00:03:54.527 20:31:23 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:54.527 20:31:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:54.527 ************************************ 00:03:54.527 END TEST ubsan 00:03:54.527 ************************************ 00:03:54.527 20:31:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:54.527 20:31:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:54.527 20:31:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:54.527 20:31:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:54.785 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:54.785 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:55.352 Using 'verbs' RDMA provider 00:04:11.162 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:26.044 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:26.303 Creating mk/config.mk...done. 00:04:26.303 Creating mk/cc.flags.mk...done. 00:04:26.303 Type 'make' to build. 
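(For reference, the configure-and-build sequence recorded above can be reproduced outside Jenkins with roughly the commands below. This is a minimal sketch, not part of the original job output: it assumes an SPDK checkout with submodules in ./spdk, and the configure flags are copied from the configure invocation logged above.)

    # Minimal sketch: rerun the configure/make steps from this log by hand.
    # Assumes an SPDK checkout (with submodules) in ./spdk; adjust paths as needed.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"   # the job itself runs "make -j48"; nproc is a generic substitute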
00:04:26.303 20:31:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:26.303 20:31:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:26.303 20:31:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:26.303 20:31:54 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.303 ************************************ 00:04:26.303 START TEST make 00:04:26.303 ************************************ 00:04:26.303 20:31:54 make -- common/autotest_common.sh@1125 -- $ make -j48 00:04:26.562 make[1]: Nothing to be done for 'all'. 00:04:28.495 The Meson build system 00:04:28.495 Version: 1.5.0 00:04:28.495 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:28.495 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:28.495 Build type: native build 00:04:28.495 Project name: libvfio-user 00:04:28.495 Project version: 0.0.1 00:04:28.495 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:28.495 C linker for the host machine: cc ld.bfd 2.40-14 00:04:28.495 Host machine cpu family: x86_64 00:04:28.495 Host machine cpu: x86_64 00:04:28.495 Run-time dependency threads found: YES 00:04:28.495 Library dl found: YES 00:04:28.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:28.495 Run-time dependency json-c found: YES 0.17 00:04:28.495 Run-time dependency cmocka found: YES 1.1.7 00:04:28.495 Program pytest-3 found: NO 00:04:28.495 Program flake8 found: NO 00:04:28.495 Program misspell-fixer found: NO 00:04:28.495 Program restructuredtext-lint found: NO 00:04:28.495 Program valgrind found: YES (/usr/bin/valgrind) 00:04:28.495 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:28.495 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:28.495 Compiler for C supports arguments -Wwrite-strings: YES 00:04:28.495 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:28.495 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:28.495 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:28.495 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:28.495 Build targets in project: 8 00:04:28.495 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:28.495 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:28.495 00:04:28.495 libvfio-user 0.0.1 00:04:28.495 00:04:28.495 User defined options 00:04:28.495 buildtype : debug 00:04:28.495 default_library: shared 00:04:28.495 libdir : /usr/local/lib 00:04:28.495 00:04:28.495 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:29.449 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:29.449 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:29.449 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:29.449 [3/37] Compiling C object samples/null.p/null.c.o 00:04:29.449 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:29.449 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:29.449 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:29.449 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:29.449 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:29.715 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:29.715 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:29.715 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:29.715 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:29.715 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:29.715 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:29.715 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:29.715 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:29.715 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:29.715 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:29.715 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:29.715 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:29.715 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:29.715 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:29.715 [23/37] Compiling C object samples/client.p/client.c.o 00:04:29.715 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:29.715 [25/37] Compiling C object samples/server.p/server.c.o 00:04:29.715 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:29.715 [27/37] Linking target samples/client 00:04:29.715 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:29.715 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:29.983 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:29.983 [31/37] Linking target test/unit_tests 00:04:29.983 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:29.983 [33/37] Linking target samples/null 00:04:29.983 [34/37] Linking target samples/gpio-pci-idio-16 00:04:29.983 [35/37] Linking target samples/server 00:04:29.983 [36/37] Linking target samples/lspci 00:04:29.983 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:30.245 INFO: autodetecting backend as ninja 00:04:30.245 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:04:30.245 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:31.188 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:31.188 ninja: no work to do. 00:04:36.465 The Meson build system 00:04:36.465 Version: 1.5.0 00:04:36.465 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:36.465 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:36.465 Build type: native build 00:04:36.465 Program cat found: YES (/usr/bin/cat) 00:04:36.465 Project name: DPDK 00:04:36.465 Project version: 24.03.0 00:04:36.465 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:36.465 C linker for the host machine: cc ld.bfd 2.40-14 00:04:36.465 Host machine cpu family: x86_64 00:04:36.465 Host machine cpu: x86_64 00:04:36.465 Message: ## Building in Developer Mode ## 00:04:36.465 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:36.465 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:36.465 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:36.465 Program python3 found: YES (/usr/bin/python3) 00:04:36.465 Program cat found: YES (/usr/bin/cat) 00:04:36.465 Compiler for C supports arguments -march=native: YES 00:04:36.465 Checking for size of "void *" : 8 00:04:36.465 Checking for size of "void *" : 8 (cached) 00:04:36.465 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:36.465 Library m found: YES 00:04:36.465 Library numa found: YES 00:04:36.465 Has header "numaif.h" : YES 00:04:36.465 Library fdt found: NO 00:04:36.465 Library execinfo found: NO 00:04:36.465 Has header "execinfo.h" : YES 00:04:36.465 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:36.465 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:36.465 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:36.465 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:36.465 Run-time dependency openssl found: YES 3.1.1 00:04:36.465 Run-time dependency libpcap found: YES 1.10.4 00:04:36.465 Has header "pcap.h" with dependency libpcap: YES 00:04:36.465 Compiler for C supports arguments -Wcast-qual: YES 00:04:36.465 Compiler for C supports arguments -Wdeprecated: YES 00:04:36.465 Compiler for C supports arguments -Wformat: YES 00:04:36.465 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:36.465 Compiler for C supports arguments -Wformat-security: NO 00:04:36.465 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:36.465 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:36.465 Compiler for C supports arguments -Wnested-externs: YES 00:04:36.465 Compiler for C supports arguments -Wold-style-definition: YES 00:04:36.465 Compiler for C supports arguments -Wpointer-arith: YES 00:04:36.465 Compiler for C supports arguments -Wsign-compare: YES 00:04:36.465 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:36.465 Compiler for C supports arguments -Wundef: YES 00:04:36.465 Compiler for C supports arguments -Wwrite-strings: YES 00:04:36.465 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:36.465 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:04:36.465 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:36.465 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:36.465 Program objdump found: YES (/usr/bin/objdump) 00:04:36.465 Compiler for C supports arguments -mavx512f: YES 00:04:36.465 Checking if "AVX512 checking" compiles: YES 00:04:36.465 Fetching value of define "__SSE4_2__" : 1 00:04:36.465 Fetching value of define "__AES__" : 1 00:04:36.465 Fetching value of define "__AVX__" : 1 00:04:36.465 Fetching value of define "__AVX2__" : (undefined) 00:04:36.465 Fetching value of define "__AVX512BW__" : (undefined) 00:04:36.465 Fetching value of define "__AVX512CD__" : (undefined) 00:04:36.465 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:36.465 Fetching value of define "__AVX512F__" : (undefined) 00:04:36.465 Fetching value of define "__AVX512VL__" : (undefined) 00:04:36.465 Fetching value of define "__PCLMUL__" : 1 00:04:36.465 Fetching value of define "__RDRND__" : 1 00:04:36.465 Fetching value of define "__RDSEED__" : (undefined) 00:04:36.465 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:36.465 Fetching value of define "__znver1__" : (undefined) 00:04:36.465 Fetching value of define "__znver2__" : (undefined) 00:04:36.465 Fetching value of define "__znver3__" : (undefined) 00:04:36.465 Fetching value of define "__znver4__" : (undefined) 00:04:36.465 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:36.465 Message: lib/log: Defining dependency "log" 00:04:36.465 Message: lib/kvargs: Defining dependency "kvargs" 00:04:36.465 Message: lib/telemetry: Defining dependency "telemetry" 00:04:36.465 Checking for function "getentropy" : NO 00:04:36.465 Message: lib/eal: Defining dependency "eal" 00:04:36.465 Message: lib/ring: Defining dependency "ring" 00:04:36.465 Message: lib/rcu: Defining dependency "rcu" 00:04:36.465 Message: lib/mempool: Defining dependency "mempool" 00:04:36.465 Message: lib/mbuf: Defining dependency "mbuf" 00:04:36.465 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:36.465 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:36.465 Compiler for C supports arguments -mpclmul: YES 00:04:36.465 Compiler for C supports arguments -maes: YES 00:04:36.465 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:36.465 Compiler for C supports arguments -mavx512bw: YES 00:04:36.465 Compiler for C supports arguments -mavx512dq: YES 00:04:36.465 Compiler for C supports arguments -mavx512vl: YES 00:04:36.465 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:36.465 Compiler for C supports arguments -mavx2: YES 00:04:36.465 Compiler for C supports arguments -mavx: YES 00:04:36.465 Message: lib/net: Defining dependency "net" 00:04:36.465 Message: lib/meter: Defining dependency "meter" 00:04:36.465 Message: lib/ethdev: Defining dependency "ethdev" 00:04:36.465 Message: lib/pci: Defining dependency "pci" 00:04:36.465 Message: lib/cmdline: Defining dependency "cmdline" 00:04:36.465 Message: lib/hash: Defining dependency "hash" 00:04:36.465 Message: lib/timer: Defining dependency "timer" 00:04:36.465 Message: lib/compressdev: Defining dependency "compressdev" 00:04:36.465 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:36.465 Message: lib/dmadev: Defining dependency "dmadev" 00:04:36.465 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:36.465 Message: lib/power: Defining dependency "power" 00:04:36.465 Message: lib/reorder: Defining dependency 
"reorder" 00:04:36.465 Message: lib/security: Defining dependency "security" 00:04:36.465 Has header "linux/userfaultfd.h" : YES 00:04:36.465 Has header "linux/vduse.h" : YES 00:04:36.465 Message: lib/vhost: Defining dependency "vhost" 00:04:36.465 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:36.465 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:36.465 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:36.465 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:36.465 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:36.465 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:36.465 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:36.465 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:36.465 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:36.465 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:36.465 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:36.465 Configuring doxy-api-html.conf using configuration 00:04:36.465 Configuring doxy-api-man.conf using configuration 00:04:36.465 Program mandb found: YES (/usr/bin/mandb) 00:04:36.465 Program sphinx-build found: NO 00:04:36.465 Configuring rte_build_config.h using configuration 00:04:36.465 Message: 00:04:36.465 ================= 00:04:36.465 Applications Enabled 00:04:36.465 ================= 00:04:36.465 00:04:36.465 apps: 00:04:36.465 00:04:36.465 00:04:36.465 Message: 00:04:36.465 ================= 00:04:36.465 Libraries Enabled 00:04:36.465 ================= 00:04:36.465 00:04:36.465 libs: 00:04:36.465 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:36.465 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:36.465 cryptodev, dmadev, power, reorder, security, vhost, 00:04:36.465 00:04:36.465 Message: 00:04:36.465 =============== 00:04:36.466 Drivers Enabled 00:04:36.466 =============== 00:04:36.466 00:04:36.466 common: 00:04:36.466 00:04:36.466 bus: 00:04:36.466 pci, vdev, 00:04:36.466 mempool: 00:04:36.466 ring, 00:04:36.466 dma: 00:04:36.466 00:04:36.466 net: 00:04:36.466 00:04:36.466 crypto: 00:04:36.466 00:04:36.466 compress: 00:04:36.466 00:04:36.466 vdpa: 00:04:36.466 00:04:36.466 00:04:36.466 Message: 00:04:36.466 ================= 00:04:36.466 Content Skipped 00:04:36.466 ================= 00:04:36.466 00:04:36.466 apps: 00:04:36.466 dumpcap: explicitly disabled via build config 00:04:36.466 graph: explicitly disabled via build config 00:04:36.466 pdump: explicitly disabled via build config 00:04:36.466 proc-info: explicitly disabled via build config 00:04:36.466 test-acl: explicitly disabled via build config 00:04:36.466 test-bbdev: explicitly disabled via build config 00:04:36.466 test-cmdline: explicitly disabled via build config 00:04:36.466 test-compress-perf: explicitly disabled via build config 00:04:36.466 test-crypto-perf: explicitly disabled via build config 00:04:36.466 test-dma-perf: explicitly disabled via build config 00:04:36.466 test-eventdev: explicitly disabled via build config 00:04:36.466 test-fib: explicitly disabled via build config 00:04:36.466 test-flow-perf: explicitly disabled via build config 00:04:36.466 test-gpudev: explicitly disabled via build config 00:04:36.466 test-mldev: explicitly disabled via build config 00:04:36.466 test-pipeline: explicitly disabled via build config 00:04:36.466 test-pmd: explicitly 
disabled via build config 00:04:36.466 test-regex: explicitly disabled via build config 00:04:36.466 test-sad: explicitly disabled via build config 00:04:36.466 test-security-perf: explicitly disabled via build config 00:04:36.466 00:04:36.466 libs: 00:04:36.466 argparse: explicitly disabled via build config 00:04:36.466 metrics: explicitly disabled via build config 00:04:36.466 acl: explicitly disabled via build config 00:04:36.466 bbdev: explicitly disabled via build config 00:04:36.466 bitratestats: explicitly disabled via build config 00:04:36.466 bpf: explicitly disabled via build config 00:04:36.466 cfgfile: explicitly disabled via build config 00:04:36.466 distributor: explicitly disabled via build config 00:04:36.466 efd: explicitly disabled via build config 00:04:36.466 eventdev: explicitly disabled via build config 00:04:36.466 dispatcher: explicitly disabled via build config 00:04:36.466 gpudev: explicitly disabled via build config 00:04:36.466 gro: explicitly disabled via build config 00:04:36.466 gso: explicitly disabled via build config 00:04:36.466 ip_frag: explicitly disabled via build config 00:04:36.466 jobstats: explicitly disabled via build config 00:04:36.466 latencystats: explicitly disabled via build config 00:04:36.466 lpm: explicitly disabled via build config 00:04:36.466 member: explicitly disabled via build config 00:04:36.466 pcapng: explicitly disabled via build config 00:04:36.466 rawdev: explicitly disabled via build config 00:04:36.466 regexdev: explicitly disabled via build config 00:04:36.466 mldev: explicitly disabled via build config 00:04:36.466 rib: explicitly disabled via build config 00:04:36.466 sched: explicitly disabled via build config 00:04:36.466 stack: explicitly disabled via build config 00:04:36.466 ipsec: explicitly disabled via build config 00:04:36.466 pdcp: explicitly disabled via build config 00:04:36.466 fib: explicitly disabled via build config 00:04:36.466 port: explicitly disabled via build config 00:04:36.466 pdump: explicitly disabled via build config 00:04:36.466 table: explicitly disabled via build config 00:04:36.466 pipeline: explicitly disabled via build config 00:04:36.466 graph: explicitly disabled via build config 00:04:36.466 node: explicitly disabled via build config 00:04:36.466 00:04:36.466 drivers: 00:04:36.466 common/cpt: not in enabled drivers build config 00:04:36.466 common/dpaax: not in enabled drivers build config 00:04:36.466 common/iavf: not in enabled drivers build config 00:04:36.466 common/idpf: not in enabled drivers build config 00:04:36.466 common/ionic: not in enabled drivers build config 00:04:36.466 common/mvep: not in enabled drivers build config 00:04:36.466 common/octeontx: not in enabled drivers build config 00:04:36.466 bus/auxiliary: not in enabled drivers build config 00:04:36.466 bus/cdx: not in enabled drivers build config 00:04:36.466 bus/dpaa: not in enabled drivers build config 00:04:36.466 bus/fslmc: not in enabled drivers build config 00:04:36.466 bus/ifpga: not in enabled drivers build config 00:04:36.466 bus/platform: not in enabled drivers build config 00:04:36.466 bus/uacce: not in enabled drivers build config 00:04:36.466 bus/vmbus: not in enabled drivers build config 00:04:36.466 common/cnxk: not in enabled drivers build config 00:04:36.466 common/mlx5: not in enabled drivers build config 00:04:36.466 common/nfp: not in enabled drivers build config 00:04:36.466 common/nitrox: not in enabled drivers build config 00:04:36.466 common/qat: not in enabled drivers build config 
00:04:36.466 common/sfc_efx: not in enabled drivers build config 00:04:36.466 mempool/bucket: not in enabled drivers build config 00:04:36.466 mempool/cnxk: not in enabled drivers build config 00:04:36.466 mempool/dpaa: not in enabled drivers build config 00:04:36.466 mempool/dpaa2: not in enabled drivers build config 00:04:36.466 mempool/octeontx: not in enabled drivers build config 00:04:36.466 mempool/stack: not in enabled drivers build config 00:04:36.466 dma/cnxk: not in enabled drivers build config 00:04:36.466 dma/dpaa: not in enabled drivers build config 00:04:36.466 dma/dpaa2: not in enabled drivers build config 00:04:36.466 dma/hisilicon: not in enabled drivers build config 00:04:36.466 dma/idxd: not in enabled drivers build config 00:04:36.466 dma/ioat: not in enabled drivers build config 00:04:36.466 dma/skeleton: not in enabled drivers build config 00:04:36.466 net/af_packet: not in enabled drivers build config 00:04:36.466 net/af_xdp: not in enabled drivers build config 00:04:36.466 net/ark: not in enabled drivers build config 00:04:36.466 net/atlantic: not in enabled drivers build config 00:04:36.466 net/avp: not in enabled drivers build config 00:04:36.466 net/axgbe: not in enabled drivers build config 00:04:36.466 net/bnx2x: not in enabled drivers build config 00:04:36.466 net/bnxt: not in enabled drivers build config 00:04:36.466 net/bonding: not in enabled drivers build config 00:04:36.466 net/cnxk: not in enabled drivers build config 00:04:36.466 net/cpfl: not in enabled drivers build config 00:04:36.466 net/cxgbe: not in enabled drivers build config 00:04:36.466 net/dpaa: not in enabled drivers build config 00:04:36.466 net/dpaa2: not in enabled drivers build config 00:04:36.466 net/e1000: not in enabled drivers build config 00:04:36.466 net/ena: not in enabled drivers build config 00:04:36.466 net/enetc: not in enabled drivers build config 00:04:36.466 net/enetfec: not in enabled drivers build config 00:04:36.466 net/enic: not in enabled drivers build config 00:04:36.466 net/failsafe: not in enabled drivers build config 00:04:36.466 net/fm10k: not in enabled drivers build config 00:04:36.466 net/gve: not in enabled drivers build config 00:04:36.466 net/hinic: not in enabled drivers build config 00:04:36.466 net/hns3: not in enabled drivers build config 00:04:36.466 net/i40e: not in enabled drivers build config 00:04:36.466 net/iavf: not in enabled drivers build config 00:04:36.466 net/ice: not in enabled drivers build config 00:04:36.466 net/idpf: not in enabled drivers build config 00:04:36.466 net/igc: not in enabled drivers build config 00:04:36.466 net/ionic: not in enabled drivers build config 00:04:36.466 net/ipn3ke: not in enabled drivers build config 00:04:36.466 net/ixgbe: not in enabled drivers build config 00:04:36.466 net/mana: not in enabled drivers build config 00:04:36.466 net/memif: not in enabled drivers build config 00:04:36.466 net/mlx4: not in enabled drivers build config 00:04:36.466 net/mlx5: not in enabled drivers build config 00:04:36.466 net/mvneta: not in enabled drivers build config 00:04:36.466 net/mvpp2: not in enabled drivers build config 00:04:36.466 net/netvsc: not in enabled drivers build config 00:04:36.466 net/nfb: not in enabled drivers build config 00:04:36.466 net/nfp: not in enabled drivers build config 00:04:36.466 net/ngbe: not in enabled drivers build config 00:04:36.466 net/null: not in enabled drivers build config 00:04:36.466 net/octeontx: not in enabled drivers build config 00:04:36.466 net/octeon_ep: not in enabled 
drivers build config 00:04:36.466 net/pcap: not in enabled drivers build config 00:04:36.466 net/pfe: not in enabled drivers build config 00:04:36.466 net/qede: not in enabled drivers build config 00:04:36.466 net/ring: not in enabled drivers build config 00:04:36.466 net/sfc: not in enabled drivers build config 00:04:36.466 net/softnic: not in enabled drivers build config 00:04:36.466 net/tap: not in enabled drivers build config 00:04:36.466 net/thunderx: not in enabled drivers build config 00:04:36.466 net/txgbe: not in enabled drivers build config 00:04:36.466 net/vdev_netvsc: not in enabled drivers build config 00:04:36.466 net/vhost: not in enabled drivers build config 00:04:36.466 net/virtio: not in enabled drivers build config 00:04:36.466 net/vmxnet3: not in enabled drivers build config 00:04:36.466 raw/*: missing internal dependency, "rawdev" 00:04:36.466 crypto/armv8: not in enabled drivers build config 00:04:36.466 crypto/bcmfs: not in enabled drivers build config 00:04:36.466 crypto/caam_jr: not in enabled drivers build config 00:04:36.466 crypto/ccp: not in enabled drivers build config 00:04:36.466 crypto/cnxk: not in enabled drivers build config 00:04:36.466 crypto/dpaa_sec: not in enabled drivers build config 00:04:36.466 crypto/dpaa2_sec: not in enabled drivers build config 00:04:36.466 crypto/ipsec_mb: not in enabled drivers build config 00:04:36.466 crypto/mlx5: not in enabled drivers build config 00:04:36.466 crypto/mvsam: not in enabled drivers build config 00:04:36.466 crypto/nitrox: not in enabled drivers build config 00:04:36.466 crypto/null: not in enabled drivers build config 00:04:36.466 crypto/octeontx: not in enabled drivers build config 00:04:36.466 crypto/openssl: not in enabled drivers build config 00:04:36.466 crypto/scheduler: not in enabled drivers build config 00:04:36.466 crypto/uadk: not in enabled drivers build config 00:04:36.466 crypto/virtio: not in enabled drivers build config 00:04:36.466 compress/isal: not in enabled drivers build config 00:04:36.466 compress/mlx5: not in enabled drivers build config 00:04:36.466 compress/nitrox: not in enabled drivers build config 00:04:36.466 compress/octeontx: not in enabled drivers build config 00:04:36.466 compress/zlib: not in enabled drivers build config 00:04:36.466 regex/*: missing internal dependency, "regexdev" 00:04:36.466 ml/*: missing internal dependency, "mldev" 00:04:36.466 vdpa/ifc: not in enabled drivers build config 00:04:36.467 vdpa/mlx5: not in enabled drivers build config 00:04:36.467 vdpa/nfp: not in enabled drivers build config 00:04:36.467 vdpa/sfc: not in enabled drivers build config 00:04:36.467 event/*: missing internal dependency, "eventdev" 00:04:36.467 baseband/*: missing internal dependency, "bbdev" 00:04:36.467 gpu/*: missing internal dependency, "gpudev" 00:04:36.467 00:04:36.467 00:04:36.467 Build targets in project: 85 00:04:36.467 00:04:36.467 DPDK 24.03.0 00:04:36.467 00:04:36.467 User defined options 00:04:36.467 buildtype : debug 00:04:36.467 default_library : shared 00:04:36.467 libdir : lib 00:04:36.467 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:36.467 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:36.467 c_link_args : 00:04:36.467 cpu_instruction_set: native 00:04:36.467 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:04:36.467 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:04:36.467 enable_docs : false 00:04:36.467 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:36.467 enable_kmods : false 00:04:36.467 max_lcores : 128 00:04:36.467 tests : false 00:04:36.467 00:04:36.467 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:37.040 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:37.040 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:37.301 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:37.301 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:37.301 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:37.301 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:37.301 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:37.301 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:37.301 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:37.301 [9/268] Linking static target lib/librte_kvargs.a 00:04:37.301 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:37.301 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:37.301 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:37.301 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:37.301 [14/268] Linking static target lib/librte_log.a 00:04:37.301 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:37.301 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:37.874 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.137 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:38.137 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:38.137 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:38.137 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:38.137 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:38.137 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:38.137 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:38.137 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:38.137 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:38.137 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:38.137 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:38.137 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:38.137 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 
00:04:38.137 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:38.137 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:38.137 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:38.137 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:38.137 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:38.137 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:38.137 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:38.137 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:38.137 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:38.137 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:38.137 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:38.137 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:38.137 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:38.137 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:38.137 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:38.137 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:38.137 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:38.137 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:38.137 [49/268] Linking static target lib/librte_telemetry.a 00:04:38.137 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:38.137 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:38.137 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:38.137 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:38.137 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:38.399 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:38.399 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:38.399 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:38.399 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:38.399 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:38.399 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:38.399 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:38.399 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:38.399 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:38.399 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.399 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:38.659 [66/268] Linking target lib/librte_log.so.24.1 00:04:38.659 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:38.659 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:38.659 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:38.659 [70/268] Linking static target lib/librte_pci.a 00:04:38.921 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:38.921 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:38.921 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:38.921 [74/268] Linking target lib/librte_kvargs.so.24.1 00:04:38.921 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:38.921 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:38.921 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:38.921 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:38.921 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:38.921 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:39.185 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:39.185 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:39.185 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:39.185 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:39.185 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:39.185 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:39.185 [87/268] Linking static target lib/librte_ring.a 00:04:39.185 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:39.185 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:39.185 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:39.185 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:39.185 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:39.185 [93/268] Linking static target lib/librte_meter.a 00:04:39.185 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:39.185 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:39.185 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:39.185 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:39.185 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:39.185 [99/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:39.185 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:39.185 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:39.185 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:39.185 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:39.185 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.185 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.185 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:39.449 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:39.449 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:39.449 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:39.449 [110/268] Linking target lib/librte_telemetry.so.24.1 00:04:39.449 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:39.449 [112/268] 
Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:39.449 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:39.449 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:39.449 [115/268] Linking static target lib/librte_eal.a 00:04:39.449 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:39.449 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:39.449 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:39.449 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:39.449 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:39.449 [121/268] Linking static target lib/librte_rcu.a 00:04:39.449 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:39.449 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:39.449 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:39.713 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:39.713 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:39.713 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:39.714 [128/268] Linking static target lib/librte_mempool.a 00:04:39.714 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:39.714 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:39.714 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:39.714 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:39.714 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:39.714 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:39.714 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:39.714 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.714 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.714 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:39.714 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:39.975 [140/268] Linking static target lib/librte_net.a 00:04:39.975 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:39.975 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:39.975 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:39.975 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:40.236 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:40.236 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:40.236 [147/268] Linking static target lib/librte_cmdline.a 00:04:40.236 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.236 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:40.236 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:40.236 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:40.236 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:40.236 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:40.236 [154/268] Linking static target lib/librte_timer.a 00:04:40.236 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:40.236 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:40.236 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:40.236 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.236 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:40.513 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:40.513 [161/268] Linking static target lib/librte_dmadev.a 00:04:40.513 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:40.513 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:40.513 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:40.513 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:40.513 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:40.513 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:40.513 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.794 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:40.794 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:40.794 [171/268] Linking static target lib/librte_power.a 00:04:40.794 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.794 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:40.794 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:40.794 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:40.794 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:40.794 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:40.794 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:40.794 [179/268] Linking static target lib/librte_compressdev.a 00:04:40.794 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:40.794 [181/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:40.794 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:40.794 [183/268] Linking static target lib/librte_reorder.a 00:04:40.794 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:40.794 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:40.794 [186/268] Linking static target lib/librte_hash.a 00:04:40.794 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:40.794 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:41.057 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.057 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:41.057 [191/268] Linking static target lib/librte_mbuf.a 00:04:41.057 [192/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:41.057 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:41.057 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:41.057 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:41.057 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.057 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:41.057 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:41.057 [199/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.316 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:41.316 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.316 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.316 [203/268] Linking static target drivers/librte_bus_vdev.a 00:04:41.316 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:41.316 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.316 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:41.316 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.316 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.316 [209/268] Linking static target lib/librte_security.a 00:04:41.317 [210/268] Linking static target drivers/librte_bus_pci.a 00:04:41.317 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:41.317 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:41.317 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:41.317 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:41.317 [215/268] Linking static target drivers/librte_mempool_ring.a 00:04:41.317 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.317 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:41.317 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.576 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.576 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.576 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.835 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.835 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:41.835 [224/268] Linking static target lib/librte_cryptodev.a 00:04:42.094 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:42.094 [226/268] Linking static target lib/librte_ethdev.a 00:04:43.472 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.381 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:47.290 [229/268] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.290 [230/268] Linking target lib/librte_eal.so.24.1 00:04:47.549 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:47.549 [232/268] Linking target lib/librte_pci.so.24.1 00:04:47.549 [233/268] Linking target lib/librte_dmadev.so.24.1 00:04:47.549 [234/268] Linking target lib/librte_meter.so.24.1 00:04:47.549 [235/268] Linking target lib/librte_ring.so.24.1 00:04:47.549 [236/268] Linking target lib/librte_timer.so.24.1 00:04:47.549 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:47.808 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:47.808 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:47.808 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:47.808 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:47.808 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:47.808 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:47.808 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:47.808 [245/268] Linking target lib/librte_rcu.so.24.1 00:04:48.067 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:48.067 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:48.067 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:48.067 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:48.328 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.328 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:48.328 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:48.328 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:04:48.587 [254/268] Linking target lib/librte_reorder.so.24.1 00:04:48.587 [255/268] Linking target lib/librte_net.so.24.1 00:04:48.587 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:48.587 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:48.847 [258/268] Linking target lib/librte_security.so.24.1 00:04:48.847 [259/268] Linking target lib/librte_hash.so.24.1 00:04:48.847 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:48.847 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:48.847 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:48.847 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:49.107 [264/268] Linking target lib/librte_power.so.24.1 00:04:59.098 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:59.098 [266/268] Linking static target lib/librte_vhost.a 00:04:59.357 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.357 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:59.357 INFO: autodetecting backend as ninja 00:04:59.357 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:05:38.116 CC lib/log/log.o 00:05:38.116 CC lib/log/log_flags.o 00:05:38.116 CC lib/log/log_deprecated.o 00:05:38.116 CC lib/ut/ut.o 00:05:38.116 CC 
lib/ut_mock/mock.o 00:05:38.116 LIB libspdk_log.a 00:05:38.116 LIB libspdk_ut.a 00:05:38.116 LIB libspdk_ut_mock.a 00:05:38.116 SO libspdk_ut.so.2.0 00:05:38.116 SO libspdk_log.so.7.0 00:05:38.116 SO libspdk_ut_mock.so.6.0 00:05:38.116 SYMLINK libspdk_ut.so 00:05:38.116 SYMLINK libspdk_ut_mock.so 00:05:38.116 SYMLINK libspdk_log.so 00:05:38.116 CC lib/dma/dma.o 00:05:38.116 CC lib/ioat/ioat.o 00:05:38.116 CXX lib/trace_parser/trace.o 00:05:38.116 CC lib/util/base64.o 00:05:38.116 CC lib/util/bit_array.o 00:05:38.116 CC lib/util/cpuset.o 00:05:38.116 CC lib/util/crc16.o 00:05:38.116 CC lib/util/crc32.o 00:05:38.116 CC lib/util/crc32c.o 00:05:38.116 CC lib/util/crc32_ieee.o 00:05:38.116 CC lib/util/crc64.o 00:05:38.116 CC lib/util/dif.o 00:05:38.116 CC lib/util/fd.o 00:05:38.116 CC lib/util/fd_group.o 00:05:38.116 CC lib/util/file.o 00:05:38.116 CC lib/util/hexlify.o 00:05:38.116 CC lib/util/iov.o 00:05:38.116 CC lib/util/math.o 00:05:38.116 CC lib/util/net.o 00:05:38.116 CC lib/util/pipe.o 00:05:38.116 CC lib/util/strerror_tls.o 00:05:38.116 CC lib/util/uuid.o 00:05:38.116 CC lib/util/string.o 00:05:38.116 CC lib/util/xor.o 00:05:38.116 CC lib/util/zipf.o 00:05:38.116 CC lib/util/md5.o 00:05:38.116 CC lib/vfio_user/host/vfio_user_pci.o 00:05:38.116 CC lib/vfio_user/host/vfio_user.o 00:05:38.116 LIB libspdk_dma.a 00:05:38.116 SO libspdk_dma.so.5.0 00:05:38.116 SYMLINK libspdk_dma.so 00:05:38.116 LIB libspdk_vfio_user.a 00:05:38.116 SO libspdk_vfio_user.so.5.0 00:05:38.116 LIB libspdk_ioat.a 00:05:38.116 SO libspdk_ioat.so.7.0 00:05:38.116 SYMLINK libspdk_vfio_user.so 00:05:38.116 LIB libspdk_util.a 00:05:38.116 SYMLINK libspdk_ioat.so 00:05:38.116 SO libspdk_util.so.10.1 00:05:38.116 SYMLINK libspdk_util.so 00:05:38.116 CC lib/idxd/idxd.o 00:05:38.116 CC lib/idxd/idxd_user.o 00:05:38.116 CC lib/idxd/idxd_kernel.o 00:05:38.116 CC lib/conf/conf.o 00:05:38.116 CC lib/vmd/led.o 00:05:38.116 CC lib/vmd/vmd.o 00:05:38.116 CC lib/json/json_parse.o 00:05:38.116 CC lib/json/json_util.o 00:05:38.116 CC lib/json/json_write.o 00:05:38.116 CC lib/rdma_provider/common.o 00:05:38.116 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:38.116 CC lib/rdma_utils/rdma_utils.o 00:05:38.116 CC lib/env_dpdk/env.o 00:05:38.116 CC lib/env_dpdk/memory.o 00:05:38.116 CC lib/env_dpdk/pci.o 00:05:38.116 CC lib/env_dpdk/init.o 00:05:38.116 CC lib/env_dpdk/threads.o 00:05:38.116 CC lib/env_dpdk/pci_ioat.o 00:05:38.116 CC lib/env_dpdk/pci_virtio.o 00:05:38.116 CC lib/env_dpdk/pci_vmd.o 00:05:38.116 CC lib/env_dpdk/pci_idxd.o 00:05:38.116 CC lib/env_dpdk/pci_event.o 00:05:38.116 CC lib/env_dpdk/sigbus_handler.o 00:05:38.116 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:38.116 CC lib/env_dpdk/pci_dpdk.o 00:05:38.116 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:38.116 LIB libspdk_trace_parser.a 00:05:38.116 SO libspdk_trace_parser.so.6.0 00:05:38.116 LIB libspdk_rdma_provider.a 00:05:38.116 LIB libspdk_conf.a 00:05:38.116 SYMLINK libspdk_trace_parser.so 00:05:38.116 SO libspdk_rdma_provider.so.6.0 00:05:38.116 SO libspdk_conf.so.6.0 00:05:38.116 SYMLINK libspdk_rdma_provider.so 00:05:38.116 SYMLINK libspdk_conf.so 00:05:38.116 LIB libspdk_rdma_utils.a 00:05:38.116 SO libspdk_rdma_utils.so.1.0 00:05:38.116 LIB libspdk_json.a 00:05:38.116 SO libspdk_json.so.6.0 00:05:38.116 SYMLINK libspdk_rdma_utils.so 00:05:38.116 SYMLINK libspdk_json.so 00:05:38.116 CC lib/jsonrpc/jsonrpc_server.o 00:05:38.116 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:38.116 CC lib/jsonrpc/jsonrpc_client.o 00:05:38.116 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:38.116 LIB 
libspdk_idxd.a 00:05:38.116 SO libspdk_idxd.so.12.1 00:05:38.116 SYMLINK libspdk_idxd.so 00:05:38.116 LIB libspdk_vmd.a 00:05:38.116 SO libspdk_vmd.so.6.0 00:05:38.116 SYMLINK libspdk_vmd.so 00:05:38.116 LIB libspdk_jsonrpc.a 00:05:38.116 SO libspdk_jsonrpc.so.6.0 00:05:38.116 SYMLINK libspdk_jsonrpc.so 00:05:38.374 CC lib/rpc/rpc.o 00:05:38.633 LIB libspdk_rpc.a 00:05:38.633 SO libspdk_rpc.so.6.0 00:05:38.633 SYMLINK libspdk_rpc.so 00:05:38.891 CC lib/notify/notify.o 00:05:38.891 CC lib/notify/notify_rpc.o 00:05:38.891 CC lib/keyring/keyring.o 00:05:38.891 CC lib/keyring/keyring_rpc.o 00:05:38.891 CC lib/trace/trace.o 00:05:38.891 CC lib/trace/trace_flags.o 00:05:38.891 CC lib/trace/trace_rpc.o 00:05:39.149 LIB libspdk_notify.a 00:05:39.149 SO libspdk_notify.so.6.0 00:05:39.149 SYMLINK libspdk_notify.so 00:05:39.149 LIB libspdk_keyring.a 00:05:39.410 SO libspdk_keyring.so.2.0 00:05:39.410 LIB libspdk_trace.a 00:05:39.410 SO libspdk_trace.so.11.0 00:05:39.410 SYMLINK libspdk_keyring.so 00:05:39.410 SYMLINK libspdk_trace.so 00:05:39.410 LIB libspdk_env_dpdk.a 00:05:39.668 SO libspdk_env_dpdk.so.15.1 00:05:39.668 CC lib/sock/sock.o 00:05:39.668 CC lib/sock/sock_rpc.o 00:05:39.668 CC lib/thread/thread.o 00:05:39.668 CC lib/thread/iobuf.o 00:05:39.668 SYMLINK libspdk_env_dpdk.so 00:05:40.236 LIB libspdk_sock.a 00:05:40.236 SO libspdk_sock.so.10.0 00:05:40.495 SYMLINK libspdk_sock.so 00:05:40.754 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:40.754 CC lib/nvme/nvme_ctrlr.o 00:05:40.754 CC lib/nvme/nvme_fabric.o 00:05:40.754 CC lib/nvme/nvme_ns_cmd.o 00:05:40.754 CC lib/nvme/nvme_ns.o 00:05:40.754 CC lib/nvme/nvme_pcie_common.o 00:05:40.754 CC lib/nvme/nvme_pcie.o 00:05:40.754 CC lib/nvme/nvme_qpair.o 00:05:40.754 CC lib/nvme/nvme.o 00:05:40.754 CC lib/nvme/nvme_quirks.o 00:05:40.754 CC lib/nvme/nvme_transport.o 00:05:40.754 CC lib/nvme/nvme_discovery.o 00:05:40.754 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:40.754 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:40.754 CC lib/nvme/nvme_tcp.o 00:05:40.754 CC lib/nvme/nvme_opal.o 00:05:40.754 CC lib/nvme/nvme_io_msg.o 00:05:40.754 CC lib/nvme/nvme_poll_group.o 00:05:40.754 CC lib/nvme/nvme_zns.o 00:05:40.754 CC lib/nvme/nvme_stubs.o 00:05:40.754 CC lib/nvme/nvme_auth.o 00:05:40.754 CC lib/nvme/nvme_cuse.o 00:05:40.754 CC lib/nvme/nvme_vfio_user.o 00:05:40.754 CC lib/nvme/nvme_rdma.o 00:05:41.690 LIB libspdk_thread.a 00:05:41.690 SO libspdk_thread.so.10.2 00:05:41.690 SYMLINK libspdk_thread.so 00:05:41.948 CC lib/fsdev/fsdev.o 00:05:41.948 CC lib/accel/accel.o 00:05:41.948 CC lib/vfu_tgt/tgt_endpoint.o 00:05:41.948 CC lib/vfu_tgt/tgt_rpc.o 00:05:41.948 CC lib/virtio/virtio.o 00:05:41.949 CC lib/fsdev/fsdev_io.o 00:05:41.949 CC lib/accel/accel_rpc.o 00:05:41.949 CC lib/accel/accel_sw.o 00:05:41.949 CC lib/blob/blobstore.o 00:05:41.949 CC lib/virtio/virtio_vhost_user.o 00:05:41.949 CC lib/init/subsystem.o 00:05:41.949 CC lib/blob/request.o 00:05:41.949 CC lib/init/json_config.o 00:05:41.949 CC lib/fsdev/fsdev_rpc.o 00:05:41.949 CC lib/virtio/virtio_vfio_user.o 00:05:41.949 CC lib/init/subsystem_rpc.o 00:05:41.949 CC lib/virtio/virtio_pci.o 00:05:41.949 CC lib/blob/zeroes.o 00:05:41.949 CC lib/blob/blob_bs_dev.o 00:05:41.949 CC lib/init/rpc.o 00:05:42.208 LIB libspdk_init.a 00:05:42.208 SO libspdk_init.so.6.0 00:05:42.208 LIB libspdk_virtio.a 00:05:42.208 LIB libspdk_vfu_tgt.a 00:05:42.208 SO libspdk_virtio.so.7.0 00:05:42.208 SYMLINK libspdk_init.so 00:05:42.208 SO libspdk_vfu_tgt.so.3.0 00:05:42.469 SYMLINK libspdk_virtio.so 00:05:42.469 SYMLINK libspdk_vfu_tgt.so 
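In the SPDK build output that begins above, each library follows the same pattern: its objects are compiled (CC), archived into a static library (LIB libspdk_*.a), linked into a versioned shared object (SO libspdk_*.so.X.Y), and exposed through an unversioned symlink (SYMLINK libspdk_*.so). The snippet below is only a sketch of what the last two steps appear to amount to, using libspdk_log (whose objects and .so.7.0 version appear in the log); it is not taken from SPDK's makefiles, and the link flags are placeholders:

```bash
# Hedged sketch of the effect of the "SO"/"SYMLINK" lines, not SPDK's real make rule.
cc -shared -o libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o   # "SO libspdk_log.so.7.0"
ln -sf libspdk_log.so.7.0 libspdk_log.so                              # "SYMLINK libspdk_log.so"
```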
00:05:42.469 CC lib/event/app.o 00:05:42.469 CC lib/event/log_rpc.o 00:05:42.469 CC lib/event/app_rpc.o 00:05:42.469 CC lib/event/scheduler_static.o 00:05:42.469 CC lib/event/reactor.o 00:05:43.039 LIB libspdk_event.a 00:05:43.039 SO libspdk_event.so.15.0 00:05:43.039 SYMLINK libspdk_event.so 00:05:43.039 LIB libspdk_fsdev.a 00:05:43.039 SO libspdk_fsdev.so.1.0 00:05:43.299 SYMLINK libspdk_fsdev.so 00:05:43.559 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:44.131 LIB libspdk_accel.a 00:05:44.131 SO libspdk_accel.so.16.0 00:05:44.131 SYMLINK libspdk_accel.so 00:05:44.131 LIB libspdk_fuse_dispatcher.a 00:05:44.392 SO libspdk_fuse_dispatcher.so.1.0 00:05:44.392 SYMLINK libspdk_fuse_dispatcher.so 00:05:44.392 CC lib/bdev/bdev_rpc.o 00:05:44.393 CC lib/bdev/bdev.o 00:05:44.393 CC lib/bdev/bdev_zone.o 00:05:44.393 CC lib/bdev/scsi_nvme.o 00:05:44.393 CC lib/bdev/part.o 00:05:44.654 LIB libspdk_nvme.a 00:05:44.654 SO libspdk_nvme.so.15.0 00:05:45.224 SYMLINK libspdk_nvme.so 00:05:45.485 LIB libspdk_blob.a 00:05:45.485 SO libspdk_blob.so.11.0 00:05:45.485 SYMLINK libspdk_blob.so 00:05:45.745 CC lib/blobfs/blobfs.o 00:05:45.745 CC lib/blobfs/tree.o 00:05:46.006 CC lib/lvol/lvol.o 00:05:46.576 LIB libspdk_blobfs.a 00:05:46.576 SO libspdk_blobfs.so.10.0 00:05:46.576 SYMLINK libspdk_blobfs.so 00:05:47.960 LIB libspdk_bdev.a 00:05:47.960 SO libspdk_bdev.so.17.0 00:05:47.960 SYMLINK libspdk_bdev.so 00:05:48.223 LIB libspdk_lvol.a 00:05:48.223 SO libspdk_lvol.so.10.0 00:05:48.223 SYMLINK libspdk_lvol.so 00:05:48.223 CC lib/nbd/nbd.o 00:05:48.223 CC lib/nbd/nbd_rpc.o 00:05:48.223 CC lib/ublk/ublk.o 00:05:48.223 CC lib/ublk/ublk_rpc.o 00:05:48.223 CC lib/ftl/ftl_init.o 00:05:48.223 CC lib/ftl/ftl_core.o 00:05:48.223 CC lib/ftl/ftl_debug.o 00:05:48.223 CC lib/ftl/ftl_io.o 00:05:48.223 CC lib/ftl/ftl_layout.o 00:05:48.223 CC lib/ftl/ftl_sb.o 00:05:48.223 CC lib/nvmf/ctrlr.o 00:05:48.223 CC lib/ftl/ftl_l2p.o 00:05:48.223 CC lib/nvmf/ctrlr_discovery.o 00:05:48.223 CC lib/ftl/ftl_nv_cache.o 00:05:48.223 CC lib/ftl/ftl_l2p_flat.o 00:05:48.223 CC lib/ftl/ftl_band.o 00:05:48.223 CC lib/nvmf/subsystem.o 00:05:48.223 CC lib/nvmf/ctrlr_bdev.o 00:05:48.223 CC lib/ftl/ftl_band_ops.o 00:05:48.223 CC lib/nvmf/nvmf.o 00:05:48.223 CC lib/ftl/ftl_writer.o 00:05:48.223 CC lib/nvmf/nvmf_rpc.o 00:05:48.223 CC lib/ftl/ftl_rq.o 00:05:48.223 CC lib/ftl/ftl_reloc.o 00:05:48.223 CC lib/nvmf/tcp.o 00:05:48.223 CC lib/nvmf/transport.o 00:05:48.223 CC lib/ftl/ftl_l2p_cache.o 00:05:48.223 CC lib/ftl/ftl_p2l.o 00:05:48.223 CC lib/ftl/ftl_p2l_log.o 00:05:48.223 CC lib/nvmf/stubs.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt.o 00:05:48.223 CC lib/nvmf/mdns_server.o 00:05:48.223 CC lib/nvmf/vfio_user.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:48.223 CC lib/nvmf/rdma.o 00:05:48.223 CC lib/nvmf/auth.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:48.223 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:48.223 CC lib/scsi/dev.o 00:05:48.793 CC lib/scsi/lun.o 00:05:48.793 CC lib/scsi/port.o 00:05:48.793 CC lib/ftl/utils/ftl_conf.o 00:05:48.793 CC lib/scsi/scsi.o 00:05:48.793 CC lib/ftl/utils/ftl_md.o 00:05:48.793 CC 
lib/scsi/scsi_bdev.o 00:05:48.793 CC lib/ftl/utils/ftl_mempool.o 00:05:48.793 CC lib/scsi/scsi_pr.o 00:05:48.793 CC lib/ftl/utils/ftl_bitmap.o 00:05:48.793 CC lib/scsi/scsi_rpc.o 00:05:48.793 CC lib/ftl/utils/ftl_property.o 00:05:48.793 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:48.793 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:48.793 CC lib/scsi/task.o 00:05:48.793 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:48.793 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:48.793 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:48.793 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:49.054 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:49.054 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:49.054 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:49.054 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:49.054 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:49.054 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:49.054 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:49.054 CC lib/ftl/base/ftl_base_dev.o 00:05:49.054 CC lib/ftl/base/ftl_base_bdev.o 00:05:49.054 CC lib/ftl/ftl_trace.o 00:05:49.312 LIB libspdk_nbd.a 00:05:49.312 SO libspdk_nbd.so.7.0 00:05:49.312 SYMLINK libspdk_nbd.so 00:05:49.571 LIB libspdk_ublk.a 00:05:49.571 SO libspdk_ublk.so.3.0 00:05:49.571 SYMLINK libspdk_ublk.so 00:05:49.571 LIB libspdk_scsi.a 00:05:49.571 SO libspdk_scsi.so.9.0 00:05:49.829 SYMLINK libspdk_scsi.so 00:05:49.829 LIB libspdk_ftl.a 00:05:49.829 CC lib/vhost/vhost.o 00:05:49.829 CC lib/vhost/vhost_rpc.o 00:05:49.829 CC lib/vhost/vhost_scsi.o 00:05:49.829 CC lib/vhost/vhost_blk.o 00:05:49.829 CC lib/vhost/rte_vhost_user.o 00:05:49.829 CC lib/iscsi/conn.o 00:05:49.829 CC lib/iscsi/init_grp.o 00:05:49.829 CC lib/iscsi/iscsi.o 00:05:49.829 CC lib/iscsi/param.o 00:05:49.829 CC lib/iscsi/portal_grp.o 00:05:49.829 CC lib/iscsi/tgt_node.o 00:05:49.829 CC lib/iscsi/iscsi_subsystem.o 00:05:49.829 CC lib/iscsi/iscsi_rpc.o 00:05:49.829 CC lib/iscsi/task.o 00:05:50.089 SO libspdk_ftl.so.9.0 00:05:50.379 SYMLINK libspdk_ftl.so 00:05:50.980 LIB libspdk_nvmf.a 00:05:51.240 SO libspdk_nvmf.so.19.0 00:05:51.499 SYMLINK libspdk_nvmf.so 00:05:52.065 LIB libspdk_iscsi.a 00:05:52.065 SO libspdk_iscsi.so.8.0 00:05:52.324 LIB libspdk_vhost.a 00:05:52.324 SO libspdk_vhost.so.8.0 00:05:52.324 SYMLINK libspdk_vhost.so 00:05:52.324 SYMLINK libspdk_iscsi.so 00:05:52.892 CC module/env_dpdk/env_dpdk_rpc.o 00:05:52.892 CC module/vfu_device/vfu_virtio.o 00:05:52.892 CC module/vfu_device/vfu_virtio_blk.o 00:05:52.892 CC module/vfu_device/vfu_virtio_scsi.o 00:05:52.892 CC module/vfu_device/vfu_virtio_rpc.o 00:05:52.892 CC module/vfu_device/vfu_virtio_fs.o 00:05:52.892 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:52.892 CC module/scheduler/gscheduler/gscheduler.o 00:05:52.892 CC module/sock/posix/posix.o 00:05:52.892 CC module/keyring/linux/keyring.o 00:05:52.892 CC module/keyring/linux/keyring_rpc.o 00:05:52.892 CC module/keyring/file/keyring.o 00:05:52.892 CC module/keyring/file/keyring_rpc.o 00:05:52.892 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:52.892 CC module/accel/iaa/accel_iaa.o 00:05:52.892 CC module/accel/iaa/accel_iaa_rpc.o 00:05:52.892 CC module/fsdev/aio/fsdev_aio.o 00:05:52.892 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:52.892 CC module/fsdev/aio/linux_aio_mgr.o 00:05:52.892 CC module/accel/error/accel_error.o 00:05:52.892 CC module/accel/error/accel_error_rpc.o 00:05:52.892 CC module/accel/ioat/accel_ioat.o 00:05:52.892 CC module/accel/ioat/accel_ioat_rpc.o 00:05:52.892 CC module/accel/dsa/accel_dsa.o 00:05:52.892 CC module/accel/dsa/accel_dsa_rpc.o 00:05:52.892 CC 
module/blob/bdev/blob_bdev.o 00:05:52.892 LIB libspdk_env_dpdk_rpc.a 00:05:52.892 SO libspdk_env_dpdk_rpc.so.6.0 00:05:53.151 LIB libspdk_scheduler_dpdk_governor.a 00:05:53.151 SYMLINK libspdk_env_dpdk_rpc.so 00:05:53.151 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:53.151 LIB libspdk_scheduler_gscheduler.a 00:05:53.151 SO libspdk_scheduler_gscheduler.so.4.0 00:05:53.151 LIB libspdk_accel_error.a 00:05:53.151 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:53.151 LIB libspdk_scheduler_dynamic.a 00:05:53.151 LIB libspdk_accel_iaa.a 00:05:53.151 LIB libspdk_accel_ioat.a 00:05:53.151 LIB libspdk_keyring_file.a 00:05:53.151 LIB libspdk_keyring_linux.a 00:05:53.151 SO libspdk_accel_error.so.2.0 00:05:53.151 SYMLINK libspdk_scheduler_gscheduler.so 00:05:53.151 SO libspdk_scheduler_dynamic.so.4.0 00:05:53.151 SO libspdk_accel_iaa.so.3.0 00:05:53.151 SO libspdk_keyring_file.so.2.0 00:05:53.151 SO libspdk_accel_ioat.so.6.0 00:05:53.151 SO libspdk_keyring_linux.so.1.0 00:05:53.151 SYMLINK libspdk_accel_error.so 00:05:53.151 SYMLINK libspdk_scheduler_dynamic.so 00:05:53.151 SYMLINK libspdk_accel_ioat.so 00:05:53.151 SYMLINK libspdk_keyring_linux.so 00:05:53.151 LIB libspdk_accel_dsa.a 00:05:53.151 SYMLINK libspdk_accel_iaa.so 00:05:53.151 SYMLINK libspdk_keyring_file.so 00:05:53.151 SO libspdk_accel_dsa.so.5.0 00:05:53.410 SYMLINK libspdk_accel_dsa.so 00:05:53.410 LIB libspdk_blob_bdev.a 00:05:53.410 SO libspdk_blob_bdev.so.11.0 00:05:53.410 SYMLINK libspdk_blob_bdev.so 00:05:53.410 LIB libspdk_vfu_device.a 00:05:53.676 SO libspdk_vfu_device.so.3.0 00:05:53.676 LIB libspdk_fsdev_aio.a 00:05:53.676 SO libspdk_fsdev_aio.so.1.0 00:05:53.676 SYMLINK libspdk_vfu_device.so 00:05:53.676 SYMLINK libspdk_fsdev_aio.so 00:05:53.676 CC module/blobfs/bdev/blobfs_bdev.o 00:05:53.676 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:53.676 CC module/bdev/gpt/vbdev_gpt.o 00:05:53.677 CC module/bdev/error/vbdev_error.o 00:05:53.677 CC module/bdev/gpt/gpt.o 00:05:53.677 CC module/bdev/error/vbdev_error_rpc.o 00:05:53.677 CC module/bdev/delay/vbdev_delay.o 00:05:53.677 CC module/bdev/lvol/vbdev_lvol.o 00:05:53.677 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:53.677 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:53.677 CC module/bdev/passthru/vbdev_passthru.o 00:05:53.677 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:53.677 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:53.677 CC module/bdev/ftl/bdev_ftl.o 00:05:53.677 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:53.677 CC module/bdev/nvme/bdev_nvme.o 00:05:53.677 CC module/bdev/iscsi/bdev_iscsi.o 00:05:53.677 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:53.677 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:53.677 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:53.677 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:53.677 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:53.677 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:53.677 CC module/bdev/nvme/nvme_rpc.o 00:05:53.677 CC module/bdev/raid/bdev_raid.o 00:05:53.677 CC module/bdev/nvme/bdev_mdns_client.o 00:05:53.677 CC module/bdev/nvme/vbdev_opal.o 00:05:53.677 CC module/bdev/raid/bdev_raid_rpc.o 00:05:53.677 CC module/bdev/aio/bdev_aio.o 00:05:53.677 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:53.677 CC module/bdev/raid/bdev_raid_sb.o 00:05:53.677 CC module/bdev/aio/bdev_aio_rpc.o 00:05:53.677 CC module/bdev/raid/raid0.o 00:05:53.677 CC module/bdev/raid/raid1.o 00:05:53.677 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:53.677 CC module/bdev/raid/concat.o 00:05:53.677 CC module/bdev/malloc/bdev_malloc.o 00:05:53.677 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:05:53.677 CC module/bdev/null/bdev_null.o 00:05:53.677 CC module/bdev/split/vbdev_split.o 00:05:53.677 CC module/bdev/null/bdev_null_rpc.o 00:05:53.677 CC module/bdev/split/vbdev_split_rpc.o 00:05:54.244 LIB libspdk_sock_posix.a 00:05:54.244 SO libspdk_sock_posix.so.6.0 00:05:54.244 LIB libspdk_blobfs_bdev.a 00:05:54.244 SO libspdk_blobfs_bdev.so.6.0 00:05:54.244 LIB libspdk_bdev_error.a 00:05:54.244 SO libspdk_bdev_error.so.6.0 00:05:54.244 LIB libspdk_bdev_null.a 00:05:54.244 SYMLINK libspdk_sock_posix.so 00:05:54.244 LIB libspdk_bdev_aio.a 00:05:54.244 SYMLINK libspdk_blobfs_bdev.so 00:05:54.244 LIB libspdk_bdev_split.a 00:05:54.244 SO libspdk_bdev_null.so.6.0 00:05:54.244 SO libspdk_bdev_split.so.6.0 00:05:54.244 SO libspdk_bdev_aio.so.6.0 00:05:54.244 LIB libspdk_bdev_gpt.a 00:05:54.244 SYMLINK libspdk_bdev_error.so 00:05:54.244 SO libspdk_bdev_gpt.so.6.0 00:05:54.244 LIB libspdk_bdev_ftl.a 00:05:54.244 SYMLINK libspdk_bdev_null.so 00:05:54.244 SYMLINK libspdk_bdev_split.so 00:05:54.244 SYMLINK libspdk_bdev_aio.so 00:05:54.244 SO libspdk_bdev_ftl.so.6.0 00:05:54.244 SYMLINK libspdk_bdev_gpt.so 00:05:54.244 LIB libspdk_bdev_malloc.a 00:05:54.503 LIB libspdk_bdev_zone_block.a 00:05:54.503 LIB libspdk_bdev_passthru.a 00:05:54.503 SO libspdk_bdev_malloc.so.6.0 00:05:54.503 SYMLINK libspdk_bdev_ftl.so 00:05:54.503 SO libspdk_bdev_passthru.so.6.0 00:05:54.503 SO libspdk_bdev_zone_block.so.6.0 00:05:54.503 LIB libspdk_bdev_iscsi.a 00:05:54.503 SYMLINK libspdk_bdev_malloc.so 00:05:54.503 SO libspdk_bdev_iscsi.so.6.0 00:05:54.503 SYMLINK libspdk_bdev_passthru.so 00:05:54.503 SYMLINK libspdk_bdev_zone_block.so 00:05:54.503 LIB libspdk_bdev_delay.a 00:05:54.503 SO libspdk_bdev_delay.so.6.0 00:05:54.503 SYMLINK libspdk_bdev_iscsi.so 00:05:54.503 SYMLINK libspdk_bdev_delay.so 00:05:54.503 LIB libspdk_bdev_lvol.a 00:05:54.503 SO libspdk_bdev_lvol.so.6.0 00:05:54.503 LIB libspdk_bdev_virtio.a 00:05:54.762 SO libspdk_bdev_virtio.so.6.0 00:05:54.762 SYMLINK libspdk_bdev_lvol.so 00:05:54.762 SYMLINK libspdk_bdev_virtio.so 00:05:55.329 LIB libspdk_bdev_raid.a 00:05:55.329 SO libspdk_bdev_raid.so.6.0 00:05:55.648 SYMLINK libspdk_bdev_raid.so 00:05:57.556 LIB libspdk_bdev_nvme.a 00:05:57.556 SO libspdk_bdev_nvme.so.7.0 00:05:57.556 SYMLINK libspdk_bdev_nvme.so 00:05:57.815 CC module/event/subsystems/iobuf/iobuf.o 00:05:57.815 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:58.074 CC module/event/subsystems/keyring/keyring.o 00:05:58.074 CC module/event/subsystems/sock/sock.o 00:05:58.074 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:58.074 CC module/event/subsystems/vmd/vmd.o 00:05:58.074 CC module/event/subsystems/scheduler/scheduler.o 00:05:58.074 CC module/event/subsystems/fsdev/fsdev.o 00:05:58.074 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:58.074 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:58.074 LIB libspdk_event_sock.a 00:05:58.074 LIB libspdk_event_iobuf.a 00:05:58.074 LIB libspdk_event_fsdev.a 00:05:58.074 LIB libspdk_event_vhost_blk.a 00:05:58.074 LIB libspdk_event_keyring.a 00:05:58.074 SO libspdk_event_sock.so.5.0 00:05:58.074 SO libspdk_event_fsdev.so.1.0 00:05:58.074 SO libspdk_event_iobuf.so.3.0 00:05:58.074 SO libspdk_event_vhost_blk.so.3.0 00:05:58.074 SO libspdk_event_keyring.so.1.0 00:05:58.074 SYMLINK libspdk_event_sock.so 00:05:58.074 SYMLINK libspdk_event_fsdev.so 00:05:58.074 SYMLINK libspdk_event_vhost_blk.so 00:05:58.333 SYMLINK libspdk_event_iobuf.so 00:05:58.333 SYMLINK libspdk_event_keyring.so 00:05:58.334 LIB 
libspdk_event_vmd.a 00:05:58.334 LIB libspdk_event_scheduler.a 00:05:58.334 LIB libspdk_event_vfu_tgt.a 00:05:58.334 SO libspdk_event_vmd.so.6.0 00:05:58.334 SO libspdk_event_scheduler.so.4.0 00:05:58.334 SO libspdk_event_vfu_tgt.so.3.0 00:05:58.334 SYMLINK libspdk_event_scheduler.so 00:05:58.334 SYMLINK libspdk_event_vfu_tgt.so 00:05:58.334 SYMLINK libspdk_event_vmd.so 00:05:58.334 CC module/event/subsystems/accel/accel.o 00:05:58.905 LIB libspdk_event_accel.a 00:05:58.905 SO libspdk_event_accel.so.6.0 00:05:58.905 SYMLINK libspdk_event_accel.so 00:05:59.166 CC module/event/subsystems/bdev/bdev.o 00:05:59.425 LIB libspdk_event_bdev.a 00:05:59.425 SO libspdk_event_bdev.so.6.0 00:05:59.425 SYMLINK libspdk_event_bdev.so 00:05:59.685 CC module/event/subsystems/ublk/ublk.o 00:05:59.685 CC module/event/subsystems/nbd/nbd.o 00:05:59.685 CC module/event/subsystems/scsi/scsi.o 00:05:59.685 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:59.685 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:59.945 LIB libspdk_event_nbd.a 00:05:59.945 LIB libspdk_event_scsi.a 00:05:59.945 SO libspdk_event_nbd.so.6.0 00:05:59.945 SO libspdk_event_scsi.so.6.0 00:05:59.945 SYMLINK libspdk_event_nbd.so 00:05:59.945 SYMLINK libspdk_event_scsi.so 00:05:59.945 LIB libspdk_event_ublk.a 00:05:59.945 SO libspdk_event_ublk.so.3.0 00:05:59.945 SYMLINK libspdk_event_ublk.so 00:06:00.204 LIB libspdk_event_nvmf.a 00:06:00.204 SO libspdk_event_nvmf.so.6.0 00:06:00.204 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:00.204 CC module/event/subsystems/iscsi/iscsi.o 00:06:00.204 SYMLINK libspdk_event_nvmf.so 00:06:00.464 LIB libspdk_event_vhost_scsi.a 00:06:00.464 SO libspdk_event_vhost_scsi.so.3.0 00:06:00.464 LIB libspdk_event_iscsi.a 00:06:00.464 SYMLINK libspdk_event_vhost_scsi.so 00:06:00.464 SO libspdk_event_iscsi.so.6.0 00:06:00.464 SYMLINK libspdk_event_iscsi.so 00:06:00.723 SO libspdk.so.6.0 00:06:00.723 SYMLINK libspdk.so 00:06:00.987 CC app/trace_record/trace_record.o 00:06:00.987 CC app/spdk_nvme_identify/identify.o 00:06:00.987 CXX app/trace/trace.o 00:06:00.987 CC app/spdk_lspci/spdk_lspci.o 00:06:00.987 CC app/spdk_top/spdk_top.o 00:06:00.987 CC app/spdk_nvme_perf/perf.o 00:06:00.987 CC app/spdk_nvme_discover/discovery_aer.o 00:06:00.987 CC test/rpc_client/rpc_client_test.o 00:06:00.987 TEST_HEADER include/spdk/accel_module.h 00:06:00.987 TEST_HEADER include/spdk/accel.h 00:06:00.987 TEST_HEADER include/spdk/assert.h 00:06:00.987 TEST_HEADER include/spdk/barrier.h 00:06:00.987 TEST_HEADER include/spdk/bdev.h 00:06:00.987 TEST_HEADER include/spdk/base64.h 00:06:00.987 TEST_HEADER include/spdk/bdev_module.h 00:06:00.987 TEST_HEADER include/spdk/bdev_zone.h 00:06:00.987 TEST_HEADER include/spdk/bit_array.h 00:06:00.987 TEST_HEADER include/spdk/bit_pool.h 00:06:00.987 TEST_HEADER include/spdk/blob_bdev.h 00:06:00.987 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:00.987 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:00.987 TEST_HEADER include/spdk/blobfs.h 00:06:00.987 TEST_HEADER include/spdk/blob.h 00:06:00.987 TEST_HEADER include/spdk/conf.h 00:06:00.987 TEST_HEADER include/spdk/config.h 00:06:00.987 TEST_HEADER include/spdk/cpuset.h 00:06:00.987 TEST_HEADER include/spdk/crc16.h 00:06:00.987 TEST_HEADER include/spdk/crc32.h 00:06:00.987 TEST_HEADER include/spdk/crc64.h 00:06:00.987 TEST_HEADER include/spdk/dif.h 00:06:00.987 TEST_HEADER include/spdk/dma.h 00:06:00.987 TEST_HEADER include/spdk/endian.h 00:06:00.987 TEST_HEADER include/spdk/env.h 00:06:00.987 TEST_HEADER include/spdk/env_dpdk.h 00:06:00.987 
TEST_HEADER include/spdk/event.h 00:06:00.987 TEST_HEADER include/spdk/fd_group.h 00:06:00.987 TEST_HEADER include/spdk/fd.h 00:06:00.987 TEST_HEADER include/spdk/file.h 00:06:00.987 TEST_HEADER include/spdk/fsdev.h 00:06:00.987 TEST_HEADER include/spdk/fsdev_module.h 00:06:00.987 TEST_HEADER include/spdk/ftl.h 00:06:00.987 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:00.987 TEST_HEADER include/spdk/gpt_spec.h 00:06:00.987 TEST_HEADER include/spdk/hexlify.h 00:06:00.987 TEST_HEADER include/spdk/histogram_data.h 00:06:00.987 TEST_HEADER include/spdk/idxd.h 00:06:00.987 TEST_HEADER include/spdk/idxd_spec.h 00:06:00.987 TEST_HEADER include/spdk/init.h 00:06:00.987 TEST_HEADER include/spdk/ioat.h 00:06:00.987 TEST_HEADER include/spdk/ioat_spec.h 00:06:00.987 CC app/spdk_dd/spdk_dd.o 00:06:00.987 TEST_HEADER include/spdk/iscsi_spec.h 00:06:00.987 TEST_HEADER include/spdk/jsonrpc.h 00:06:00.987 TEST_HEADER include/spdk/json.h 00:06:00.987 TEST_HEADER include/spdk/keyring.h 00:06:00.987 TEST_HEADER include/spdk/keyring_module.h 00:06:00.987 TEST_HEADER include/spdk/likely.h 00:06:00.987 TEST_HEADER include/spdk/lvol.h 00:06:00.987 TEST_HEADER include/spdk/log.h 00:06:00.987 TEST_HEADER include/spdk/md5.h 00:06:00.987 TEST_HEADER include/spdk/memory.h 00:06:00.987 TEST_HEADER include/spdk/mmio.h 00:06:00.987 TEST_HEADER include/spdk/nbd.h 00:06:00.987 TEST_HEADER include/spdk/net.h 00:06:00.987 TEST_HEADER include/spdk/notify.h 00:06:00.987 CC app/iscsi_tgt/iscsi_tgt.o 00:06:00.987 TEST_HEADER include/spdk/nvme.h 00:06:00.987 TEST_HEADER include/spdk/nvme_intel.h 00:06:00.987 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:00.987 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:00.987 TEST_HEADER include/spdk/nvme_spec.h 00:06:00.987 CC app/nvmf_tgt/nvmf_main.o 00:06:00.987 TEST_HEADER include/spdk/nvme_zns.h 00:06:00.987 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:00.987 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:00.987 TEST_HEADER include/spdk/nvmf.h 00:06:00.987 TEST_HEADER include/spdk/nvmf_spec.h 00:06:00.987 TEST_HEADER include/spdk/nvmf_transport.h 00:06:00.987 TEST_HEADER include/spdk/opal.h 00:06:00.987 TEST_HEADER include/spdk/opal_spec.h 00:06:00.987 TEST_HEADER include/spdk/pci_ids.h 00:06:00.987 TEST_HEADER include/spdk/pipe.h 00:06:00.987 TEST_HEADER include/spdk/queue.h 00:06:00.987 TEST_HEADER include/spdk/reduce.h 00:06:00.987 TEST_HEADER include/spdk/rpc.h 00:06:00.987 TEST_HEADER include/spdk/scheduler.h 00:06:00.987 TEST_HEADER include/spdk/scsi.h 00:06:00.987 TEST_HEADER include/spdk/sock.h 00:06:00.987 TEST_HEADER include/spdk/scsi_spec.h 00:06:00.987 TEST_HEADER include/spdk/stdinc.h 00:06:00.987 TEST_HEADER include/spdk/string.h 00:06:00.987 TEST_HEADER include/spdk/trace.h 00:06:00.987 TEST_HEADER include/spdk/thread.h 00:06:00.987 TEST_HEADER include/spdk/trace_parser.h 00:06:00.987 TEST_HEADER include/spdk/tree.h 00:06:00.987 TEST_HEADER include/spdk/ublk.h 00:06:00.987 TEST_HEADER include/spdk/uuid.h 00:06:00.987 TEST_HEADER include/spdk/util.h 00:06:00.987 TEST_HEADER include/spdk/version.h 00:06:00.987 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:00.987 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:00.987 TEST_HEADER include/spdk/vhost.h 00:06:00.987 TEST_HEADER include/spdk/vmd.h 00:06:00.987 TEST_HEADER include/spdk/xor.h 00:06:00.987 TEST_HEADER include/spdk/zipf.h 00:06:00.987 CXX test/cpp_headers/accel.o 00:06:00.987 CXX test/cpp_headers/accel_module.o 00:06:00.987 CXX test/cpp_headers/assert.o 00:06:00.987 CXX test/cpp_headers/barrier.o 00:06:00.987 CXX 
test/cpp_headers/base64.o 00:06:00.987 CXX test/cpp_headers/bdev.o 00:06:00.987 CXX test/cpp_headers/bdev_module.o 00:06:00.987 CXX test/cpp_headers/bdev_zone.o 00:06:00.987 CXX test/cpp_headers/bit_array.o 00:06:00.987 CXX test/cpp_headers/bit_pool.o 00:06:00.987 CXX test/cpp_headers/blob_bdev.o 00:06:00.987 CXX test/cpp_headers/blobfs_bdev.o 00:06:00.987 CXX test/cpp_headers/blobfs.o 00:06:00.987 CXX test/cpp_headers/blob.o 00:06:00.987 CXX test/cpp_headers/conf.o 00:06:00.987 CXX test/cpp_headers/config.o 00:06:00.987 CXX test/cpp_headers/cpuset.o 00:06:00.987 CXX test/cpp_headers/crc16.o 00:06:00.987 CC app/spdk_tgt/spdk_tgt.o 00:06:00.987 CC examples/ioat/perf/perf.o 00:06:00.987 CC examples/util/zipf/zipf.o 00:06:00.987 CC examples/ioat/verify/verify.o 00:06:00.987 CXX test/cpp_headers/crc32.o 00:06:00.987 CC app/fio/nvme/fio_plugin.o 00:06:00.987 CC test/env/vtophys/vtophys.o 00:06:00.987 CC test/app/histogram_perf/histogram_perf.o 00:06:00.987 CC test/env/memory/memory_ut.o 00:06:00.987 CC test/env/pci/pci_ut.o 00:06:00.987 CC test/app/jsoncat/jsoncat.o 00:06:00.987 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:00.987 CC test/app/stub/stub.o 00:06:00.987 CC test/thread/poller_perf/poller_perf.o 00:06:01.246 CC app/fio/bdev/fio_plugin.o 00:06:01.246 CC test/dma/test_dma/test_dma.o 00:06:01.246 CC test/app/bdev_svc/bdev_svc.o 00:06:01.246 LINK spdk_lspci 00:06:01.246 CC test/env/mem_callbacks/mem_callbacks.o 00:06:01.246 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:01.508 LINK spdk_nvme_discover 00:06:01.508 LINK rpc_client_test 00:06:01.508 LINK zipf 00:06:01.508 LINK nvmf_tgt 00:06:01.508 LINK interrupt_tgt 00:06:01.508 LINK histogram_perf 00:06:01.508 LINK vtophys 00:06:01.508 CXX test/cpp_headers/crc64.o 00:06:01.508 LINK jsoncat 00:06:01.508 CXX test/cpp_headers/dif.o 00:06:01.508 CXX test/cpp_headers/dma.o 00:06:01.508 LINK spdk_trace_record 00:06:01.508 LINK poller_perf 00:06:01.508 CXX test/cpp_headers/endian.o 00:06:01.508 LINK env_dpdk_post_init 00:06:01.508 LINK iscsi_tgt 00:06:01.508 LINK verify 00:06:01.508 CXX test/cpp_headers/env_dpdk.o 00:06:01.508 CXX test/cpp_headers/env.o 00:06:01.508 CXX test/cpp_headers/event.o 00:06:01.508 CXX test/cpp_headers/fd_group.o 00:06:01.508 CXX test/cpp_headers/fd.o 00:06:01.508 CXX test/cpp_headers/file.o 00:06:01.508 LINK stub 00:06:01.508 CXX test/cpp_headers/fsdev.o 00:06:01.508 CXX test/cpp_headers/fsdev_module.o 00:06:01.508 CXX test/cpp_headers/ftl.o 00:06:01.508 LINK spdk_tgt 00:06:01.508 CXX test/cpp_headers/fuse_dispatcher.o 00:06:01.508 LINK ioat_perf 00:06:01.508 CXX test/cpp_headers/gpt_spec.o 00:06:01.508 LINK bdev_svc 00:06:01.508 CXX test/cpp_headers/hexlify.o 00:06:01.508 CXX test/cpp_headers/histogram_data.o 00:06:01.508 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:01.775 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:01.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:01.775 LINK spdk_dd 00:06:01.775 CXX test/cpp_headers/idxd.o 00:06:01.775 CXX test/cpp_headers/idxd_spec.o 00:06:01.775 CXX test/cpp_headers/init.o 00:06:01.775 LINK spdk_trace 00:06:01.775 CXX test/cpp_headers/ioat.o 00:06:01.775 CXX test/cpp_headers/ioat_spec.o 00:06:01.775 CXX test/cpp_headers/iscsi_spec.o 00:06:01.775 CXX test/cpp_headers/json.o 00:06:01.775 CXX test/cpp_headers/jsonrpc.o 00:06:01.775 CXX test/cpp_headers/keyring.o 00:06:01.775 CXX test/cpp_headers/keyring_module.o 00:06:02.036 CXX test/cpp_headers/likely.o 00:06:02.036 CXX test/cpp_headers/log.o 00:06:02.036 CXX test/cpp_headers/lvol.o 00:06:02.036 CXX 
test/cpp_headers/md5.o 00:06:02.036 CXX test/cpp_headers/memory.o 00:06:02.036 CXX test/cpp_headers/mmio.o 00:06:02.036 CXX test/cpp_headers/nbd.o 00:06:02.036 CXX test/cpp_headers/net.o 00:06:02.036 CXX test/cpp_headers/notify.o 00:06:02.036 CXX test/cpp_headers/nvme.o 00:06:02.036 CXX test/cpp_headers/nvme_intel.o 00:06:02.036 LINK pci_ut 00:06:02.036 CXX test/cpp_headers/nvme_ocssd.o 00:06:02.036 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:02.036 CXX test/cpp_headers/nvme_spec.o 00:06:02.036 CXX test/cpp_headers/nvme_zns.o 00:06:02.036 CXX test/cpp_headers/nvmf_cmd.o 00:06:02.036 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:02.036 CXX test/cpp_headers/nvmf.o 00:06:02.036 CC examples/sock/hello_world/hello_sock.o 00:06:02.036 CXX test/cpp_headers/nvmf_spec.o 00:06:02.036 CC examples/thread/thread/thread_ex.o 00:06:02.299 CC examples/vmd/lsvmd/lsvmd.o 00:06:02.299 CC examples/idxd/perf/perf.o 00:06:02.299 LINK nvme_fuzz 00:06:02.299 LINK spdk_nvme 00:06:02.299 CC examples/vmd/led/led.o 00:06:02.299 LINK spdk_bdev 00:06:02.299 CXX test/cpp_headers/nvmf_transport.o 00:06:02.299 CC test/event/event_perf/event_perf.o 00:06:02.299 CC test/event/reactor/reactor.o 00:06:02.299 LINK test_dma 00:06:02.299 CC test/event/reactor_perf/reactor_perf.o 00:06:02.299 CXX test/cpp_headers/opal.o 00:06:02.299 CXX test/cpp_headers/opal_spec.o 00:06:02.299 CXX test/cpp_headers/pci_ids.o 00:06:02.299 CC test/event/app_repeat/app_repeat.o 00:06:02.299 CXX test/cpp_headers/pipe.o 00:06:02.299 CXX test/cpp_headers/queue.o 00:06:02.299 CXX test/cpp_headers/reduce.o 00:06:02.299 CXX test/cpp_headers/rpc.o 00:06:02.299 CXX test/cpp_headers/scheduler.o 00:06:02.299 CXX test/cpp_headers/scsi.o 00:06:02.299 CXX test/cpp_headers/scsi_spec.o 00:06:02.299 CXX test/cpp_headers/sock.o 00:06:02.299 CXX test/cpp_headers/stdinc.o 00:06:02.299 CXX test/cpp_headers/string.o 00:06:02.299 CXX test/cpp_headers/thread.o 00:06:02.568 CXX test/cpp_headers/trace.o 00:06:02.568 CXX test/cpp_headers/trace_parser.o 00:06:02.568 CXX test/cpp_headers/tree.o 00:06:02.568 CC test/event/scheduler/scheduler.o 00:06:02.568 LINK lsvmd 00:06:02.568 CXX test/cpp_headers/ublk.o 00:06:02.568 CXX test/cpp_headers/util.o 00:06:02.568 CXX test/cpp_headers/uuid.o 00:06:02.568 CXX test/cpp_headers/version.o 00:06:02.568 LINK spdk_nvme_identify 00:06:02.568 CXX test/cpp_headers/vfio_user_pci.o 00:06:02.568 CXX test/cpp_headers/vfio_user_spec.o 00:06:02.568 CXX test/cpp_headers/vhost.o 00:06:02.568 CXX test/cpp_headers/vmd.o 00:06:02.568 LINK led 00:06:02.568 CC app/vhost/vhost.o 00:06:02.568 LINK mem_callbacks 00:06:02.568 CXX test/cpp_headers/xor.o 00:06:02.568 LINK spdk_nvme_perf 00:06:02.568 CXX test/cpp_headers/zipf.o 00:06:02.568 LINK reactor 00:06:02.568 LINK event_perf 00:06:02.568 LINK reactor_perf 00:06:02.568 LINK vhost_fuzz 00:06:02.568 LINK hello_sock 00:06:02.568 LINK spdk_top 00:06:02.568 LINK app_repeat 00:06:02.828 LINK thread 00:06:02.828 LINK idxd_perf 00:06:02.828 LINK vhost 00:06:02.828 LINK scheduler 00:06:02.828 CC test/nvme/startup/startup.o 00:06:02.828 CC test/nvme/connect_stress/connect_stress.o 00:06:02.828 CC test/nvme/boot_partition/boot_partition.o 00:06:03.088 CC test/nvme/sgl/sgl.o 00:06:03.088 CC test/nvme/fdp/fdp.o 00:06:03.088 CC test/nvme/reset/reset.o 00:06:03.088 CC test/nvme/overhead/overhead.o 00:06:03.088 CC test/nvme/err_injection/err_injection.o 00:06:03.088 CC test/nvme/fused_ordering/fused_ordering.o 00:06:03.088 CC test/nvme/simple_copy/simple_copy.o 00:06:03.088 CC test/nvme/reserve/reserve.o 00:06:03.088 CC 
test/nvme/e2edp/nvme_dp.o 00:06:03.088 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:03.088 CC test/nvme/cuse/cuse.o 00:06:03.088 CC test/nvme/aer/aer.o 00:06:03.088 CC test/nvme/compliance/nvme_compliance.o 00:06:03.088 CC test/accel/dif/dif.o 00:06:03.088 CC test/blobfs/mkfs/mkfs.o 00:06:03.088 CC test/lvol/esnap/esnap.o 00:06:03.088 CC examples/nvme/hello_world/hello_world.o 00:06:03.088 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:03.088 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:03.088 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:03.088 CC examples/nvme/hotplug/hotplug.o 00:06:03.088 CC examples/nvme/abort/abort.o 00:06:03.088 CC examples/nvme/arbitration/arbitration.o 00:06:03.088 CC examples/nvme/reconnect/reconnect.o 00:06:03.347 LINK boot_partition 00:06:03.347 LINK startup 00:06:03.347 LINK memory_ut 00:06:03.347 LINK doorbell_aers 00:06:03.347 LINK reserve 00:06:03.347 LINK mkfs 00:06:03.347 CC examples/accel/perf/accel_perf.o 00:06:03.347 LINK connect_stress 00:06:03.347 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:03.347 LINK err_injection 00:06:03.347 LINK fused_ordering 00:06:03.347 LINK pmr_persistence 00:06:03.347 CC examples/blob/cli/blobcli.o 00:06:03.347 LINK reset 00:06:03.347 LINK nvme_compliance 00:06:03.347 CC examples/blob/hello_world/hello_blob.o 00:06:03.347 LINK aer 00:06:03.347 LINK fdp 00:06:03.347 LINK simple_copy 00:06:03.347 LINK cmb_copy 00:06:03.606 LINK nvme_dp 00:06:03.606 LINK sgl 00:06:03.606 LINK hotplug 00:06:03.606 LINK overhead 00:06:03.606 LINK hello_world 00:06:03.606 LINK arbitration 00:06:03.606 LINK abort 00:06:03.606 LINK hello_fsdev 00:06:03.864 LINK hello_blob 00:06:03.864 LINK nvme_manage 00:06:03.864 LINK reconnect 00:06:03.864 LINK dif 00:06:04.123 LINK blobcli 00:06:04.123 LINK accel_perf 00:06:04.381 LINK iscsi_fuzz 00:06:04.381 CC examples/bdev/hello_world/hello_bdev.o 00:06:04.642 CC test/bdev/bdevio/bdevio.o 00:06:04.642 CC examples/bdev/bdevperf/bdevperf.o 00:06:04.642 LINK cuse 00:06:05.212 LINK bdevio 00:06:05.212 LINK hello_bdev 00:06:06.598 LINK bdevperf 00:06:07.171 CC examples/nvmf/nvmf/nvmf.o 00:06:07.432 LINK nvmf 00:06:15.566 LINK esnap 00:06:15.566 00:06:15.566 real 1m48.593s 00:06:15.566 user 13m56.917s 00:06:15.566 sys 2m53.711s 00:06:15.566 20:33:43 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:15.566 20:33:43 make -- common/autotest_common.sh@10 -- $ set +x 00:06:15.566 ************************************ 00:06:15.566 END TEST make 00:06:15.566 ************************************ 00:06:15.566 20:33:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:15.566 20:33:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:15.566 20:33:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:15.566 20:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.566 20:33:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:15.566 20:33:43 -- pm/common@44 -- $ pid=1498367 00:06:15.566 20:33:43 -- pm/common@50 -- $ kill -TERM 1498367 00:06:15.566 20:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.566 20:33:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:15.566 20:33:43 -- pm/common@44 -- $ pid=1498369 00:06:15.566 20:33:43 -- pm/common@50 -- $ kill -TERM 1498369 00:06:15.566 20:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.566 20:33:43 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:15.566 20:33:43 -- pm/common@44 -- $ pid=1498371 00:06:15.566 20:33:43 -- pm/common@50 -- $ kill -TERM 1498371 00:06:15.566 20:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.566 20:33:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:15.566 20:33:43 -- pm/common@44 -- $ pid=1498403 00:06:15.566 20:33:43 -- pm/common@50 -- $ sudo -E kill -TERM 1498403 00:06:15.566 20:33:43 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.566 20:33:43 -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.566 20:33:43 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.566 20:33:43 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.566 20:33:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.566 20:33:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.566 20:33:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.566 20:33:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.566 20:33:43 -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.566 20:33:43 -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.566 20:33:43 -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.566 20:33:43 -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.566 20:33:43 -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.566 20:33:43 -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.566 20:33:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.566 20:33:43 -- scripts/common.sh@344 -- # case "$op" in 00:06:15.566 20:33:43 -- scripts/common.sh@345 -- # : 1 00:06:15.566 20:33:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.566 20:33:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.566 20:33:43 -- scripts/common.sh@365 -- # decimal 1 00:06:15.566 20:33:43 -- scripts/common.sh@353 -- # local d=1 00:06:15.566 20:33:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.566 20:33:43 -- scripts/common.sh@355 -- # echo 1 00:06:15.566 20:33:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.566 20:33:43 -- scripts/common.sh@366 -- # decimal 2 00:06:15.566 20:33:43 -- scripts/common.sh@353 -- # local d=2 00:06:15.566 20:33:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.566 20:33:43 -- scripts/common.sh@355 -- # echo 2 00:06:15.566 20:33:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.566 20:33:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.567 20:33:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.567 20:33:43 -- scripts/common.sh@368 -- # return 0 00:06:15.567 20:33:43 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.567 20:33:43 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.567 --rc genhtml_branch_coverage=1 00:06:15.567 --rc genhtml_function_coverage=1 00:06:15.567 --rc genhtml_legend=1 00:06:15.567 --rc geninfo_all_blocks=1 00:06:15.567 --rc geninfo_unexecuted_blocks=1 00:06:15.567 00:06:15.567 ' 00:06:15.567 20:33:43 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.567 --rc genhtml_branch_coverage=1 00:06:15.567 --rc genhtml_function_coverage=1 00:06:15.567 --rc genhtml_legend=1 00:06:15.567 --rc geninfo_all_blocks=1 00:06:15.567 --rc geninfo_unexecuted_blocks=1 00:06:15.567 00:06:15.567 ' 00:06:15.567 20:33:43 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.567 --rc genhtml_branch_coverage=1 00:06:15.567 --rc genhtml_function_coverage=1 00:06:15.567 --rc genhtml_legend=1 00:06:15.567 --rc geninfo_all_blocks=1 00:06:15.567 --rc geninfo_unexecuted_blocks=1 00:06:15.567 00:06:15.567 ' 00:06:15.567 20:33:43 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.567 --rc genhtml_branch_coverage=1 00:06:15.567 --rc genhtml_function_coverage=1 00:06:15.567 --rc genhtml_legend=1 00:06:15.567 --rc geninfo_all_blocks=1 00:06:15.567 --rc geninfo_unexecuted_blocks=1 00:06:15.567 00:06:15.567 ' 00:06:15.567 20:33:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.567 20:33:43 -- nvmf/common.sh@7 -- # uname -s 00:06:15.567 20:33:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.567 20:33:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.567 20:33:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.567 20:33:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.567 20:33:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.567 20:33:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.567 20:33:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.567 20:33:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.567 20:33:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.567 20:33:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.567 20:33:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:15.567 20:33:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:15.567 20:33:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.567 20:33:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.567 20:33:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.567 20:33:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.567 20:33:43 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.567 20:33:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.567 20:33:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.567 20:33:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.567 20:33:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.567 20:33:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.567 20:33:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.567 20:33:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.567 20:33:43 -- paths/export.sh@5 -- # export PATH 00:06:15.567 20:33:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.567 20:33:43 -- nvmf/common.sh@51 -- # : 0 00:06:15.567 20:33:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.567 20:33:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.567 20:33:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.567 20:33:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.567 20:33:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.567 20:33:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.567 20:33:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.567 20:33:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.567 20:33:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.567 20:33:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:15.567 20:33:43 -- spdk/autotest.sh@32 -- # uname -s 00:06:15.567 20:33:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:15.567 20:33:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:15.567 20:33:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
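Note: the NVME_HOSTNQN / NVME_HOSTID values generated above by `nvme gen-hostnqn` are only set up here; they get consumed later when the tests connect to the NVMe-oF TCP target. A minimal sketch of how those variables are typically used with nvme-cli follows; the target address and subsystem NQN are placeholders, not values taken from this run.

#!/usr/bin/env bash
# Sketch: connect to an NVMe-oF TCP subsystem using the host identity
# variables prepared in test/nvmf/common.sh (TARGET_ADDR is hypothetical).
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # bare UUID, assuming the usual NQN layout
NVMF_PORT=4420
TARGET_ADDR=10.0.0.2                      # placeholder traddr, not from this log
SUBNQN=nqn.2016-06.io.spdk:testnqn

nvme connect -t tcp -a "$TARGET_ADDR" -s "$NVMF_PORT" -n "$SUBNQN" \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme list                                 # verify the namespace appeared
nvme disconnect -n "$SUBNQN"              # clean up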
00:06:15.567 20:33:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:15.567 20:33:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:15.567 20:33:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:15.567 20:33:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:15.567 20:33:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:15.567 20:33:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1563727 00:06:15.567 20:33:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:15.567 20:33:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:15.567 20:33:43 -- pm/common@17 -- # local monitor 00:06:15.567 20:33:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.567 20:33:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.567 20:33:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.567 20:33:43 -- pm/common@21 -- # date +%s 00:06:15.567 20:33:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:15.567 20:33:43 -- pm/common@21 -- # date +%s 00:06:15.567 20:33:43 -- pm/common@25 -- # sleep 1 00:06:15.567 20:33:43 -- pm/common@21 -- # date +%s 00:06:15.567 20:33:43 -- pm/common@21 -- # date +%s 00:06:15.567 20:33:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728412423 00:06:15.567 20:33:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728412423 00:06:15.567 20:33:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728412423 00:06:15.567 20:33:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728412423 00:06:15.567 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728412423_collect-cpu-load.pm.log 00:06:15.567 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728412423_collect-vmstat.pm.log 00:06:15.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728412423_collect-cpu-temp.pm.log 00:06:15.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728412423_collect-bmc-pm.bmc.pm.log 00:06:16.133 20:33:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:16.133 20:33:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:16.133 20:33:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.133 20:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.133 20:33:44 -- spdk/autotest.sh@59 -- # create_test_list 00:06:16.133 20:33:44 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:16.133 20:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.133 20:33:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:16.133 20:33:44 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:16.133 20:33:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:16.133 20:33:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:16.133 20:33:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:16.133 20:33:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:16.133 20:33:44 -- common/autotest_common.sh@1455 -- # uname 00:06:16.133 20:33:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:16.133 20:33:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:16.133 20:33:44 -- common/autotest_common.sh@1475 -- # uname 00:06:16.133 20:33:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:16.133 20:33:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:16.133 20:33:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:16.133 lcov: LCOV version 1.15 00:06:16.133 20:33:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:54.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:54.843 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:51.158 20:35:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:51.158 20:35:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.158 20:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:51.158 20:35:15 -- spdk/autotest.sh@78 -- # rm -f 00:07:51.158 20:35:15 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:51.158 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:07:51.158 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:07:51.158 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:07:51.158 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:07:51.158 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:07:51.158 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:07:51.158 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:07:51.158 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:07:51.158 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:07:51.158 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:07:51.158 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:07:51.158 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:07:51.158 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:07:51.158 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:07:51.158 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:07:51.158 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:07:51.158 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:07:51.158 20:35:17 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:51.158 20:35:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:51.158 20:35:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:51.158 20:35:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:51.158 20:35:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:51.158 20:35:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:51.158 20:35:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:51.158 20:35:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:51.158 20:35:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:51.158 20:35:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:51.158 20:35:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:51.158 20:35:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:51.158 20:35:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:51.158 20:35:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:51.158 20:35:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:51.158 No valid GPT data, bailing 00:07:51.158 20:35:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:51.158 20:35:17 -- scripts/common.sh@394 -- # pt= 00:07:51.158 20:35:17 -- scripts/common.sh@395 -- # return 1 00:07:51.158 20:35:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:51.158 1+0 records in 00:07:51.158 1+0 records out 00:07:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0020813 s, 504 MB/s 00:07:51.158 20:35:17 -- spdk/autotest.sh@105 -- # sync 00:07:51.158 20:35:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:51.158 20:35:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:51.158 20:35:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:51.418 20:35:20 -- spdk/autotest.sh@111 -- # uname -s 00:07:51.418 20:35:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:51.418 20:35:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:51.418 20:35:20 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:53.326 Hugepages 00:07:53.326 node hugesize free / total 00:07:53.326 node0 1048576kB 0 / 0 00:07:53.326 node0 2048kB 0 / 0 00:07:53.326 node1 1048576kB 0 / 0 00:07:53.326 node1 2048kB 0 / 0 00:07:53.326 00:07:53.326 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:53.326 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:53.326 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:53.326 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:07:53.326 NVMe 0000:82:00.0 8086 0a54 1 nvme 
nvme0 nvme0n1 00:07:53.326 20:35:21 -- spdk/autotest.sh@117 -- # uname -s 00:07:53.326 20:35:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:53.326 20:35:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:53.326 20:35:21 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:54.707 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:54.707 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:54.707 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:55.645 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:07:55.904 20:35:24 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:56.838 20:35:25 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:56.838 20:35:25 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:56.838 20:35:25 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.838 20:35:25 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:56.838 20:35:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:56.838 20:35:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:56.838 20:35:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.838 20:35:25 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:56.838 20:35:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:56.838 20:35:25 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:56.838 20:35:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:07:56.838 20:35:25 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:58.739 Waiting for block devices as requested 00:07:58.739 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:07:58.739 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:58.739 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:58.997 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:58.997 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:58.997 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:59.255 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:59.255 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:59.255 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:59.513 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:59.513 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:59.513 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:59.513 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:59.770 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:59.770 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:59.770 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:59.770 0000:80:04.0 (8086 0e20): vfio-pci 
-> ioatdma 00:08:00.028 20:35:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:00.028 20:35:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1485 -- # grep 0000:82:00.0/nvme/nvme 00:08:00.028 20:35:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:08:00.028 20:35:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:00.028 20:35:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:00.028 20:35:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:00.028 20:35:28 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:08:00.028 20:35:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:00.028 20:35:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:00.028 20:35:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:00.028 20:35:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:00.028 20:35:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:00.028 20:35:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:00.028 20:35:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:00.028 20:35:28 -- common/autotest_common.sh@1541 -- # continue 00:08:00.028 20:35:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:00.028 20:35:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.028 20:35:28 -- common/autotest_common.sh@10 -- # set +x 00:08:00.028 20:35:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:00.028 20:35:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.028 20:35:28 -- common/autotest_common.sh@10 -- # set +x 00:08:00.028 20:35:28 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:01.930 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:01.930 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:01.930 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:02.870 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:08:02.870 20:35:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
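The oacs / unvmcap parsing above is what decides whether the controller supports namespace management before autotest touches it. A rough by-hand equivalent of that check, assuming the usual NVMe meaning of OACS bit 3 (mask 0x8) for namespace management, looks like this:

#!/usr/bin/env bash
# Sketch of the OACS / unvmcap check seen in the xtrace above.
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. ' 0xf'
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity

if (( oacs & 0x8 )); then
    echo "$ctrlr supports namespace management"
fi
if (( unvmcap == 0 )); then
    echo "$ctrlr has no unallocated capacity; nothing to revert"
fi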
00:08:02.870 20:35:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.870 20:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:02.870 20:35:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:02.870 20:35:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:02.870 20:35:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:02.870 20:35:31 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:02.870 20:35:31 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:02.870 20:35:31 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:02.870 20:35:31 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:02.870 20:35:31 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:02.870 20:35:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:02.870 20:35:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:02.870 20:35:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:02.870 20:35:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:02.870 20:35:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:03.131 20:35:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:03.131 20:35:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:08:03.131 20:35:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:03.131 20:35:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:08:03.131 20:35:31 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:08:03.131 20:35:31 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:03.131 20:35:31 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:08:03.131 20:35:31 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:08:03.131 20:35:31 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:82:00.0 00:08:03.131 20:35:31 -- common/autotest_common.sh@1577 -- # [[ -z 0000:82:00.0 ]] 00:08:03.131 20:35:31 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1580547 00:08:03.131 20:35:31 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.131 20:35:31 -- common/autotest_common.sh@1583 -- # waitforlisten 1580547 00:08:03.131 20:35:31 -- common/autotest_common.sh@831 -- # '[' -z 1580547 ']' 00:08:03.131 20:35:31 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.131 20:35:31 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.131 20:35:31 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.131 20:35:31 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.131 20:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:03.131 [2024-10-08 20:35:31.759972] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
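The get_nvme_bdfs / get_nvme_bdfs_by_id helpers above build their device list by asking scripts/gen_nvme.sh for a bdev config, pulling the traddr fields out with jq, and then filtering on the PCI device ID (0x0a54 here). The same enumeration can be reproduced by hand with the commands the xtrace shows:

#!/usr/bin/env bash
# Sketch: list NVMe BDFs the way get_nvme_bdfs does, then filter by device ID.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

wanted=0x0a54
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    if [[ $device == "$wanted" ]]; then
        echo "$bdf matches device ID $wanted"
    fi
done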
00:08:03.131 [2024-10-08 20:35:31.760067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580547 ] 00:08:03.131 [2024-10-08 20:35:31.828632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.392 [2024-10-08 20:35:32.051927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.959 20:35:32 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.959 20:35:32 -- common/autotest_common.sh@864 -- # return 0 00:08:03.959 20:35:32 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:03.959 20:35:32 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:03.959 20:35:32 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:08:07.261 nvme0n1 00:08:07.261 20:35:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:07.520 [2024-10-08 20:35:36.046481] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:08:07.520 [2024-10-08 20:35:36.046578] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:08:07.520 request: 00:08:07.520 { 00:08:07.520 "nvme_ctrlr_name": "nvme0", 00:08:07.520 "password": "test", 00:08:07.520 "method": "bdev_nvme_opal_revert", 00:08:07.520 "req_id": 1 00:08:07.520 } 00:08:07.520 Got JSON-RPC error response 00:08:07.520 response: 00:08:07.520 { 00:08:07.520 "code": -32603, 00:08:07.520 "message": "Internal error" 00:08:07.520 } 00:08:07.520 20:35:36 -- common/autotest_common.sh@1589 -- # true 00:08:07.520 20:35:36 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:07.520 20:35:36 -- common/autotest_common.sh@1593 -- # killprocess 1580547 00:08:07.520 20:35:36 -- common/autotest_common.sh@950 -- # '[' -z 1580547 ']' 00:08:07.520 20:35:36 -- common/autotest_common.sh@954 -- # kill -0 1580547 00:08:07.520 20:35:36 -- common/autotest_common.sh@955 -- # uname 00:08:07.520 20:35:36 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.520 20:35:36 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580547 00:08:07.520 20:35:36 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.520 20:35:36 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.520 20:35:36 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580547' 00:08:07.520 killing process with pid 1580547 00:08:07.520 20:35:36 -- common/autotest_common.sh@969 -- # kill 1580547 00:08:07.520 20:35:36 -- common/autotest_common.sh@974 -- # wait 1580547 00:08:10.049 20:35:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:10.049 20:35:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:10.049 20:35:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:10.049 20:35:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:10.049 20:35:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:10.049 20:35:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.049 20:35:38 -- common/autotest_common.sh@10 -- # set +x 00:08:10.049 20:35:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:10.049 20:35:38 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:10.049 20:35:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.049 20:35:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.049 20:35:38 -- common/autotest_common.sh@10 -- # set +x 00:08:10.049 ************************************ 00:08:10.049 START TEST env 00:08:10.049 ************************************ 00:08:10.049 20:35:38 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:10.049 * Looking for test storage... 00:08:10.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:10.049 20:35:38 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:10.049 20:35:38 env -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.049 20:35:38 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:10.049 20:35:38 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:10.049 20:35:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.049 20:35:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.049 20:35:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.049 20:35:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.049 20:35:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.049 20:35:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.050 20:35:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.050 20:35:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.050 20:35:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.050 20:35:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.050 20:35:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.050 20:35:38 env -- scripts/common.sh@344 -- # case "$op" in 00:08:10.050 20:35:38 env -- scripts/common.sh@345 -- # : 1 00:08:10.050 20:35:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.050 20:35:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.050 20:35:38 env -- scripts/common.sh@365 -- # decimal 1 00:08:10.050 20:35:38 env -- scripts/common.sh@353 -- # local d=1 00:08:10.050 20:35:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.050 20:35:38 env -- scripts/common.sh@355 -- # echo 1 00:08:10.050 20:35:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.050 20:35:38 env -- scripts/common.sh@366 -- # decimal 2 00:08:10.050 20:35:38 env -- scripts/common.sh@353 -- # local d=2 00:08:10.050 20:35:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.050 20:35:38 env -- scripts/common.sh@355 -- # echo 2 00:08:10.050 20:35:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.050 20:35:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.050 20:35:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.050 20:35:38 env -- scripts/common.sh@368 -- # return 0 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.050 --rc genhtml_branch_coverage=1 00:08:10.050 --rc genhtml_function_coverage=1 00:08:10.050 --rc genhtml_legend=1 00:08:10.050 --rc geninfo_all_blocks=1 00:08:10.050 --rc geninfo_unexecuted_blocks=1 00:08:10.050 00:08:10.050 ' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.050 --rc genhtml_branch_coverage=1 00:08:10.050 --rc genhtml_function_coverage=1 00:08:10.050 --rc genhtml_legend=1 00:08:10.050 --rc geninfo_all_blocks=1 00:08:10.050 --rc geninfo_unexecuted_blocks=1 00:08:10.050 00:08:10.050 ' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.050 --rc genhtml_branch_coverage=1 00:08:10.050 --rc genhtml_function_coverage=1 00:08:10.050 --rc genhtml_legend=1 00:08:10.050 --rc geninfo_all_blocks=1 00:08:10.050 --rc geninfo_unexecuted_blocks=1 00:08:10.050 00:08:10.050 ' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.050 --rc genhtml_branch_coverage=1 00:08:10.050 --rc genhtml_function_coverage=1 00:08:10.050 --rc genhtml_legend=1 00:08:10.050 --rc geninfo_all_blocks=1 00:08:10.050 --rc geninfo_unexecuted_blocks=1 00:08:10.050 00:08:10.050 ' 00:08:10.050 20:35:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.050 20:35:38 env -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 ************************************ 00:08:10.050 START TEST env_memory 00:08:10.050 ************************************ 00:08:10.050 20:35:38 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:10.050 00:08:10.050 00:08:10.050 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.050 http://cunit.sourceforge.net/ 00:08:10.050 00:08:10.050 00:08:10.050 Suite: memory 00:08:10.050 Test: alloc and free memory map ...[2024-10-08 20:35:38.549192] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:10.050 passed 00:08:10.050 Test: mem map translation ...[2024-10-08 20:35:38.578350] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:10.050 [2024-10-08 20:35:38.578383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:10.050 [2024-10-08 20:35:38.578452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:10.050 [2024-10-08 20:35:38.578469] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:10.050 passed 00:08:10.050 Test: mem map registration ...[2024-10-08 20:35:38.639917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:10.050 [2024-10-08 20:35:38.639947] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:10.050 passed 00:08:10.050 Test: mem map adjacent registrations ...passed 00:08:10.050 00:08:10.050 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.050 suites 1 1 n/a 0 0 00:08:10.050 tests 4 4 4 0 0 00:08:10.050 asserts 152 152 152 0 n/a 00:08:10.050 00:08:10.050 Elapsed time = 0.202 seconds 00:08:10.050 00:08:10.050 real 0m0.212s 00:08:10.050 user 0m0.203s 00:08:10.050 sys 0m0.008s 00:08:10.050 20:35:38 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.050 20:35:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 ************************************ 00:08:10.050 END TEST env_memory 00:08:10.050 ************************************ 00:08:10.050 20:35:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.050 20:35:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.050 20:35:38 env -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 ************************************ 00:08:10.050 START TEST env_vtophys 00:08:10.050 ************************************ 00:08:10.050 20:35:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.050 EAL: lib.eal log level changed from notice to debug 00:08:10.050 EAL: Detected lcore 0 as core 0 on socket 0 00:08:10.050 EAL: Detected lcore 1 as core 1 on socket 0 00:08:10.050 EAL: Detected lcore 2 as core 2 on socket 0 00:08:10.050 EAL: Detected lcore 3 as core 3 on socket 0 00:08:10.050 EAL: Detected lcore 4 as core 4 on socket 0 00:08:10.050 EAL: Detected lcore 5 as core 5 on socket 0 00:08:10.050 EAL: Detected lcore 6 as core 8 on socket 0 00:08:10.050 EAL: Detected lcore 7 as core 9 on socket 0 00:08:10.050 EAL: Detected lcore 8 as core 10 on socket 0 00:08:10.050 EAL: Detected lcore 9 as core 11 on socket 0 00:08:10.050 EAL: Detected lcore 10 
as core 12 on socket 0 00:08:10.050 EAL: Detected lcore 11 as core 13 on socket 0 00:08:10.050 EAL: Detected lcore 12 as core 0 on socket 1 00:08:10.050 EAL: Detected lcore 13 as core 1 on socket 1 00:08:10.050 EAL: Detected lcore 14 as core 2 on socket 1 00:08:10.050 EAL: Detected lcore 15 as core 3 on socket 1 00:08:10.050 EAL: Detected lcore 16 as core 4 on socket 1 00:08:10.050 EAL: Detected lcore 17 as core 5 on socket 1 00:08:10.050 EAL: Detected lcore 18 as core 8 on socket 1 00:08:10.050 EAL: Detected lcore 19 as core 9 on socket 1 00:08:10.050 EAL: Detected lcore 20 as core 10 on socket 1 00:08:10.050 EAL: Detected lcore 21 as core 11 on socket 1 00:08:10.050 EAL: Detected lcore 22 as core 12 on socket 1 00:08:10.050 EAL: Detected lcore 23 as core 13 on socket 1 00:08:10.050 EAL: Detected lcore 24 as core 0 on socket 0 00:08:10.050 EAL: Detected lcore 25 as core 1 on socket 0 00:08:10.050 EAL: Detected lcore 26 as core 2 on socket 0 00:08:10.050 EAL: Detected lcore 27 as core 3 on socket 0 00:08:10.050 EAL: Detected lcore 28 as core 4 on socket 0 00:08:10.050 EAL: Detected lcore 29 as core 5 on socket 0 00:08:10.050 EAL: Detected lcore 30 as core 8 on socket 0 00:08:10.050 EAL: Detected lcore 31 as core 9 on socket 0 00:08:10.050 EAL: Detected lcore 32 as core 10 on socket 0 00:08:10.050 EAL: Detected lcore 33 as core 11 on socket 0 00:08:10.050 EAL: Detected lcore 34 as core 12 on socket 0 00:08:10.050 EAL: Detected lcore 35 as core 13 on socket 0 00:08:10.050 EAL: Detected lcore 36 as core 0 on socket 1 00:08:10.311 EAL: Detected lcore 37 as core 1 on socket 1 00:08:10.311 EAL: Detected lcore 38 as core 2 on socket 1 00:08:10.311 EAL: Detected lcore 39 as core 3 on socket 1 00:08:10.311 EAL: Detected lcore 40 as core 4 on socket 1 00:08:10.311 EAL: Detected lcore 41 as core 5 on socket 1 00:08:10.311 EAL: Detected lcore 42 as core 8 on socket 1 00:08:10.311 EAL: Detected lcore 43 as core 9 on socket 1 00:08:10.311 EAL: Detected lcore 44 as core 10 on socket 1 00:08:10.311 EAL: Detected lcore 45 as core 11 on socket 1 00:08:10.311 EAL: Detected lcore 46 as core 12 on socket 1 00:08:10.311 EAL: Detected lcore 47 as core 13 on socket 1 00:08:10.311 EAL: Maximum logical cores by configuration: 128 00:08:10.311 EAL: Detected CPU lcores: 48 00:08:10.311 EAL: Detected NUMA nodes: 2 00:08:10.311 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:10.311 EAL: Detected shared linkage of DPDK 00:08:10.311 EAL: No shared files mode enabled, IPC will be disabled 00:08:10.311 EAL: Bus pci wants IOVA as 'DC' 00:08:10.311 EAL: Buses did not request a specific IOVA mode. 00:08:10.311 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:10.311 EAL: Selected IOVA mode 'VA' 00:08:10.311 EAL: Probing VFIO support... 00:08:10.311 EAL: IOMMU type 1 (Type 1) is supported 00:08:10.311 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:10.311 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:10.311 EAL: VFIO support initialized 00:08:10.311 EAL: Ask a virtual area of 0x2e000 bytes 00:08:10.311 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:10.311 EAL: Setting up physically contiguous memory... 
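The IOMMU/VFIO probe above ("IOMMU type 1 (Type 1) is supported ... VFIO support initialized") is what lets the tests bind the NVMe and ioatdma devices to vfio-pci later in the run. A quick way to confirm the same preconditions outside of EAL, assuming a Linux host with sysfs mounted, is:

#!/usr/bin/env bash
# Sketch: verify the IOMMU/VFIO preconditions that EAL reports above.
groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d | wc -l)
echo "IOMMU groups present: $groups"   # 0 means no usable IOMMU, so no 'VA' IOVA mode

if lsmod | grep -q '^vfio_pci'; then
    echo "vfio-pci module loaded"
else
    echo "vfio-pci not loaded; setup.sh would need to load it before binding devices"
fi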
00:08:10.311 EAL: Setting maximum number of open files to 524288 00:08:10.311 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:10.311 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:10.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:10.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:10.311 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.311 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:10.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.311 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.311 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:10.311 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:10.311 EAL: Hugepages will be freed exactly as allocated. 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: TSC frequency is ~2700000 KHz 00:08:10.311 EAL: Main lcore 0 is ready (tid=7f99c4114a00;cpuset=[0]) 00:08:10.311 EAL: Trying to obtain current memory policy. 00:08:10.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.311 EAL: Restoring previous memory policy: 0 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was expanded by 2MB 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:10.311 EAL: Mem event callback 'spdk:(nil)' registered 00:08:10.311 00:08:10.311 00:08:10.311 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.311 http://cunit.sourceforge.net/ 00:08:10.311 00:08:10.311 00:08:10.311 Suite: components_suite 00:08:10.311 Test: vtophys_malloc_test ...passed 00:08:10.311 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:10.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.311 EAL: Restoring previous memory policy: 4 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was expanded by 4MB 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was shrunk by 4MB 00:08:10.311 EAL: Trying to obtain current memory policy. 00:08:10.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.311 EAL: Restoring previous memory policy: 4 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was expanded by 6MB 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was shrunk by 6MB 00:08:10.311 EAL: Trying to obtain current memory policy. 00:08:10.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.311 EAL: Restoring previous memory policy: 4 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was expanded by 10MB 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.311 EAL: request: mp_malloc_sync 00:08:10.311 EAL: No shared files mode enabled, IPC is disabled 00:08:10.311 EAL: Heap on socket 0 was shrunk by 10MB 00:08:10.311 EAL: Trying to obtain current memory policy. 
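Before the heap expand/shrink loop that follows, EAL has pinned its memseg lists to 2 MB hugepages on both NUMA nodes (see the Hugepages table earlier in the log). If the mp_malloc_sync requests below start failing, the per-node hugepage pools are worth checking; a small sketch, assuming 2048 kB pages as in this run:

#!/usr/bin/env bash
# Sketch: show free vs. total 2 MB hugepages on each NUMA node.
for node in /sys/devices/system/node/node[0-9]*; do
    total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
    echo "$(basename "$node"): $free free / $total total 2048kB hugepages"
done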
00:08:10.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.311 EAL: Restoring previous memory policy: 4 00:08:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was expanded by 18MB 00:08:10.312 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was shrunk by 18MB 00:08:10.312 EAL: Trying to obtain current memory policy. 00:08:10.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.312 EAL: Restoring previous memory policy: 4 00:08:10.312 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was expanded by 34MB 00:08:10.312 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was shrunk by 34MB 00:08:10.312 EAL: Trying to obtain current memory policy. 00:08:10.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.312 EAL: Restoring previous memory policy: 4 00:08:10.312 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was expanded by 66MB 00:08:10.312 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.312 EAL: request: mp_malloc_sync 00:08:10.312 EAL: No shared files mode enabled, IPC is disabled 00:08:10.312 EAL: Heap on socket 0 was shrunk by 66MB 00:08:10.312 EAL: Trying to obtain current memory policy. 00:08:10.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.570 EAL: Restoring previous memory policy: 4 00:08:10.570 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.570 EAL: request: mp_malloc_sync 00:08:10.570 EAL: No shared files mode enabled, IPC is disabled 00:08:10.570 EAL: Heap on socket 0 was expanded by 130MB 00:08:10.570 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.570 EAL: request: mp_malloc_sync 00:08:10.570 EAL: No shared files mode enabled, IPC is disabled 00:08:10.570 EAL: Heap on socket 0 was shrunk by 130MB 00:08:10.570 EAL: Trying to obtain current memory policy. 00:08:10.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.570 EAL: Restoring previous memory policy: 4 00:08:10.570 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.570 EAL: request: mp_malloc_sync 00:08:10.570 EAL: No shared files mode enabled, IPC is disabled 00:08:10.570 EAL: Heap on socket 0 was expanded by 258MB 00:08:10.570 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.830 EAL: request: mp_malloc_sync 00:08:10.830 EAL: No shared files mode enabled, IPC is disabled 00:08:10.830 EAL: Heap on socket 0 was shrunk by 258MB 00:08:10.830 EAL: Trying to obtain current memory policy. 
00:08:10.830 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.089 EAL: Restoring previous memory policy: 4 00:08:11.089 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.089 EAL: request: mp_malloc_sync 00:08:11.089 EAL: No shared files mode enabled, IPC is disabled 00:08:11.089 EAL: Heap on socket 0 was expanded by 514MB 00:08:11.089 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.349 EAL: request: mp_malloc_sync 00:08:11.349 EAL: No shared files mode enabled, IPC is disabled 00:08:11.349 EAL: Heap on socket 0 was shrunk by 514MB 00:08:11.349 EAL: Trying to obtain current memory policy. 00:08:11.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.919 EAL: Restoring previous memory policy: 4 00:08:11.919 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.919 EAL: request: mp_malloc_sync 00:08:11.919 EAL: No shared files mode enabled, IPC is disabled 00:08:11.919 EAL: Heap on socket 0 was expanded by 1026MB 00:08:11.919 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.178 EAL: request: mp_malloc_sync 00:08:12.178 EAL: No shared files mode enabled, IPC is disabled 00:08:12.178 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:12.178 passed 00:08:12.178 00:08:12.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.178 suites 1 1 n/a 0 0 00:08:12.178 tests 2 2 2 0 0 00:08:12.178 asserts 497 497 497 0 n/a 00:08:12.178 00:08:12.178 Elapsed time = 1.901 seconds 00:08:12.178 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.178 EAL: request: mp_malloc_sync 00:08:12.178 EAL: No shared files mode enabled, IPC is disabled 00:08:12.178 EAL: Heap on socket 0 was shrunk by 2MB 00:08:12.178 EAL: No shared files mode enabled, IPC is disabled 00:08:12.178 EAL: No shared files mode enabled, IPC is disabled 00:08:12.178 EAL: No shared files mode enabled, IPC is disabled 00:08:12.178 00:08:12.178 real 0m2.140s 00:08:12.178 user 0m1.051s 00:08:12.178 sys 0m1.029s 00:08:12.178 20:35:40 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.178 20:35:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:12.178 ************************************ 00:08:12.178 END TEST env_vtophys 00:08:12.178 ************************************ 00:08:12.438 20:35:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:12.438 20:35:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.438 20:35:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.438 20:35:40 env -- common/autotest_common.sh@10 -- # set +x 00:08:12.438 ************************************ 00:08:12.438 START TEST env_pci 00:08:12.438 ************************************ 00:08:12.438 20:35:40 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:12.438 00:08:12.438 00:08:12.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.438 http://cunit.sourceforge.net/ 00:08:12.438 00:08:12.438 00:08:12.438 Suite: pci 00:08:12.438 Test: pci_hook ...[2024-10-08 20:35:41.003685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1581700 has claimed it 00:08:12.438 EAL: Cannot find device (10000:00:01.0) 00:08:12.438 EAL: Failed to attach device on primary process 00:08:12.438 passed 00:08:12.438 00:08:12.438 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:12.438 suites 1 1 n/a 0 0 00:08:12.438 tests 1 1 1 0 0 00:08:12.438 asserts 25 25 25 0 n/a 00:08:12.438 00:08:12.438 Elapsed time = 0.022 seconds 00:08:12.438 00:08:12.438 real 0m0.036s 00:08:12.438 user 0m0.012s 00:08:12.438 sys 0m0.024s 00:08:12.438 20:35:41 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.438 20:35:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:12.438 ************************************ 00:08:12.438 END TEST env_pci 00:08:12.438 ************************************ 00:08:12.438 20:35:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:12.438 20:35:41 env -- env/env.sh@15 -- # uname 00:08:12.438 20:35:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:12.438 20:35:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:12.439 20:35:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:12.439 20:35:41 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.439 20:35:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.439 20:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:08:12.439 ************************************ 00:08:12.439 START TEST env_dpdk_post_init 00:08:12.439 ************************************ 00:08:12.439 20:35:41 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:12.439 EAL: Detected CPU lcores: 48 00:08:12.439 EAL: Detected NUMA nodes: 2 00:08:12.439 EAL: Detected shared linkage of DPDK 00:08:12.439 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:12.439 EAL: Selected IOVA mode 'VA' 00:08:12.439 EAL: VFIO support initialized 00:08:12.697 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:12.697 EAL: Using IOMMU type 1 (Type 1) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:08:12.697 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:08:12.955 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:08:13.889 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
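env_dpdk_post_init attaches to whatever PCI devices are already bound to a userspace driver, which is why every ioat channel on both sockets plus the NVMe controller at 0000:82:00.0 shows up as a 'Probe PCI driver' line above. A quick, read-only way to see that binding before running a test like this (from the SPDK repository root) is:

  # Show current driver binding and hugepage reservations; setup.sh is the
  # stock SPDK helper and 'status' makes no changes to the system.
  sudo ./scripts/setup.sh status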
00:08:17.206 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:08:17.206 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:08:17.206 Starting DPDK initialization... 00:08:17.206 Starting SPDK post initialization... 00:08:17.206 SPDK NVMe probe 00:08:17.206 Attaching to 0000:82:00.0 00:08:17.206 Attached to 0000:82:00.0 00:08:17.206 Cleaning up... 00:08:17.206 00:08:17.206 real 0m4.568s 00:08:17.206 user 0m3.035s 00:08:17.206 sys 0m0.576s 00:08:17.206 20:35:45 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.206 20:35:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:17.206 ************************************ 00:08:17.206 END TEST env_dpdk_post_init 00:08:17.206 ************************************ 00:08:17.206 20:35:45 env -- env/env.sh@26 -- # uname 00:08:17.206 20:35:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:17.206 20:35:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.206 20:35:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.206 20:35:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.206 20:35:45 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.206 ************************************ 00:08:17.206 START TEST env_mem_callbacks 00:08:17.206 ************************************ 00:08:17.206 20:35:45 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.206 EAL: Detected CPU lcores: 48 00:08:17.206 EAL: Detected NUMA nodes: 2 00:08:17.206 EAL: Detected shared linkage of DPDK 00:08:17.206 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.206 EAL: Selected IOVA mode 'VA' 00:08:17.206 EAL: VFIO support initialized 00:08:17.206 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.206 00:08:17.206 00:08:17.206 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.206 http://cunit.sourceforge.net/ 00:08:17.206 00:08:17.206 00:08:17.206 Suite: memory 00:08:17.206 Test: test ... 
00:08:17.206 register 0x200000200000 2097152 00:08:17.206 malloc 3145728 00:08:17.206 register 0x200000400000 4194304 00:08:17.206 buf 0x200000500000 len 3145728 PASSED 00:08:17.206 malloc 64 00:08:17.206 buf 0x2000004fff40 len 64 PASSED 00:08:17.206 malloc 4194304 00:08:17.206 register 0x200000800000 6291456 00:08:17.206 buf 0x200000a00000 len 4194304 PASSED 00:08:17.206 free 0x200000500000 3145728 00:08:17.206 free 0x2000004fff40 64 00:08:17.206 unregister 0x200000400000 4194304 PASSED 00:08:17.206 free 0x200000a00000 4194304 00:08:17.206 unregister 0x200000800000 6291456 PASSED 00:08:17.206 malloc 8388608 00:08:17.206 register 0x200000400000 10485760 00:08:17.206 buf 0x200000600000 len 8388608 PASSED 00:08:17.206 free 0x200000600000 8388608 00:08:17.206 unregister 0x200000400000 10485760 PASSED 00:08:17.206 passed 00:08:17.206 00:08:17.206 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.206 suites 1 1 n/a 0 0 00:08:17.206 tests 1 1 1 0 0 00:08:17.206 asserts 15 15 15 0 n/a 00:08:17.206 00:08:17.206 Elapsed time = 0.010 seconds 00:08:17.206 00:08:17.206 real 0m0.098s 00:08:17.206 user 0m0.026s 00:08:17.206 sys 0m0.072s 00:08:17.206 20:35:45 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.206 20:35:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:17.206 ************************************ 00:08:17.206 END TEST env_mem_callbacks 00:08:17.206 ************************************ 00:08:17.206 00:08:17.206 real 0m7.562s 00:08:17.206 user 0m4.580s 00:08:17.206 sys 0m1.989s 00:08:17.206 20:35:45 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.206 20:35:45 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.206 ************************************ 00:08:17.206 END TEST env 00:08:17.206 ************************************ 00:08:17.206 20:35:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.206 20:35:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.206 20:35:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.206 20:35:45 -- common/autotest_common.sh@10 -- # set +x 00:08:17.206 ************************************ 00:08:17.206 START TEST rpc 00:08:17.206 ************************************ 00:08:17.206 20:35:45 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.517 * Looking for test storage... 
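Before the rpc suite output continues, note that in the mem_callbacks run above every 'register' line is eventually paired with an 'unregister' of the same region and length, and the buffers returned in between (the 'buf ... len ... PASSED' lines) all fall inside the registered ranges. The binary can also be rerun on its own, outside the env.sh wrapper (path copied from the log; hugepages must already be configured, e.g. via scripts/setup.sh):

  # Re-run only the memory-callbacks unit test.
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks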
00:08:17.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:17.517 20:35:45 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:17.517 20:35:45 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:17.517 20:35:45 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.517 20:35:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.517 20:35:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.517 20:35:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.517 20:35:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.517 20:35:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.517 20:35:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:17.517 20:35:46 rpc -- scripts/common.sh@345 -- # : 1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.517 20:35:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.517 20:35:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@353 -- # local d=1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.517 20:35:46 rpc -- scripts/common.sh@355 -- # echo 1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.517 20:35:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@353 -- # local d=2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.517 20:35:46 rpc -- scripts/common.sh@355 -- # echo 2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.517 20:35:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.517 20:35:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.517 20:35:46 rpc -- scripts/common.sh@368 -- # return 0 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:17.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.517 --rc genhtml_branch_coverage=1 00:08:17.517 --rc genhtml_function_coverage=1 00:08:17.517 --rc genhtml_legend=1 00:08:17.517 --rc geninfo_all_blocks=1 00:08:17.517 --rc geninfo_unexecuted_blocks=1 00:08:17.517 00:08:17.517 ' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:17.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.517 --rc genhtml_branch_coverage=1 00:08:17.517 --rc genhtml_function_coverage=1 00:08:17.517 --rc genhtml_legend=1 00:08:17.517 --rc geninfo_all_blocks=1 00:08:17.517 --rc geninfo_unexecuted_blocks=1 00:08:17.517 00:08:17.517 ' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:17.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.517 --rc genhtml_branch_coverage=1 00:08:17.517 --rc genhtml_function_coverage=1 
00:08:17.517 --rc genhtml_legend=1 00:08:17.517 --rc geninfo_all_blocks=1 00:08:17.517 --rc geninfo_unexecuted_blocks=1 00:08:17.517 00:08:17.517 ' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:17.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.517 --rc genhtml_branch_coverage=1 00:08:17.517 --rc genhtml_function_coverage=1 00:08:17.517 --rc genhtml_legend=1 00:08:17.517 --rc geninfo_all_blocks=1 00:08:17.517 --rc geninfo_unexecuted_blocks=1 00:08:17.517 00:08:17.517 ' 00:08:17.517 20:35:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1582371 00:08:17.517 20:35:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:17.517 20:35:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.517 20:35:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1582371 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@831 -- # '[' -z 1582371 ']' 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.517 20:35:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.796 [2024-10-08 20:35:46.268798] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:08:17.796 [2024-10-08 20:35:46.268892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582371 ] 00:08:17.796 [2024-10-08 20:35:46.369892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.056 [2024-10-08 20:35:46.583117] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:18.056 [2024-10-08 20:35:46.583167] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1582371' to capture a snapshot of events at runtime. 00:08:18.056 [2024-10-08 20:35:46.583183] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.056 [2024-10-08 20:35:46.583197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.056 [2024-10-08 20:35:46.583209] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1582371 for offline analysis/debug. 
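The spdk_tgt started here for the rpc tests listens on /var/tmp/spdk.sock, the socket waitforlisten polls. Outside the test harness the same target can be queried with SPDK's bundled JSON-RPC client; a minimal sanity check, using a method this log exercises later anyway, would be:

  # Ask the running target for its version over the default UNIX socket.
  sudo ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version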
00:08:18.056 [2024-10-08 20:35:46.583970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.995 20:35:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.995 20:35:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:18.995 20:35:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:18.995 20:35:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:18.995 20:35:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:18.995 20:35:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:18.995 20:35:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.995 20:35:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.995 20:35:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 ************************************ 00:08:18.995 START TEST rpc_integrity 00:08:18.995 ************************************ 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.995 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.995 { 00:08:18.995 "name": "Malloc0", 00:08:18.995 "aliases": [ 00:08:18.995 "d36155a2-a135-4577-9031-5609bf55ef33" 00:08:18.995 ], 00:08:18.995 "product_name": "Malloc disk", 00:08:18.995 "block_size": 512, 00:08:18.995 "num_blocks": 16384, 00:08:18.995 "uuid": "d36155a2-a135-4577-9031-5609bf55ef33", 00:08:18.995 "assigned_rate_limits": { 00:08:18.995 "rw_ios_per_sec": 0, 00:08:18.995 "rw_mbytes_per_sec": 0, 00:08:18.995 "r_mbytes_per_sec": 0, 00:08:18.995 "w_mbytes_per_sec": 0 00:08:18.995 }, 
00:08:18.995 "claimed": false, 00:08:18.995 "zoned": false, 00:08:18.995 "supported_io_types": { 00:08:18.995 "read": true, 00:08:18.995 "write": true, 00:08:18.995 "unmap": true, 00:08:18.995 "flush": true, 00:08:18.995 "reset": true, 00:08:18.995 "nvme_admin": false, 00:08:18.995 "nvme_io": false, 00:08:18.995 "nvme_io_md": false, 00:08:18.995 "write_zeroes": true, 00:08:18.995 "zcopy": true, 00:08:18.995 "get_zone_info": false, 00:08:18.995 "zone_management": false, 00:08:18.995 "zone_append": false, 00:08:18.996 "compare": false, 00:08:18.996 "compare_and_write": false, 00:08:18.996 "abort": true, 00:08:18.996 "seek_hole": false, 00:08:18.996 "seek_data": false, 00:08:18.996 "copy": true, 00:08:18.996 "nvme_iov_md": false 00:08:18.996 }, 00:08:18.996 "memory_domains": [ 00:08:18.996 { 00:08:18.996 "dma_device_id": "system", 00:08:18.996 "dma_device_type": 1 00:08:18.996 }, 00:08:18.996 { 00:08:18.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.996 "dma_device_type": 2 00:08:18.996 } 00:08:18.996 ], 00:08:18.996 "driver_specific": {} 00:08:18.996 } 00:08:18.996 ]' 00:08:18.996 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:18.996 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.996 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:18.996 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.996 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.996 [2024-10-08 20:35:47.727735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:18.996 [2024-10-08 20:35:47.727781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.996 [2024-10-08 20:35:47.727807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb07b00 00:08:18.996 [2024-10-08 20:35:47.727823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.996 [2024-10-08 20:35:47.731057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.996 [2024-10-08 20:35:47.731119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.996 Passthru0 00:08:18.996 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.996 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.996 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.996 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.262 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.262 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:19.262 { 00:08:19.262 "name": "Malloc0", 00:08:19.262 "aliases": [ 00:08:19.262 "d36155a2-a135-4577-9031-5609bf55ef33" 00:08:19.262 ], 00:08:19.262 "product_name": "Malloc disk", 00:08:19.262 "block_size": 512, 00:08:19.262 "num_blocks": 16384, 00:08:19.262 "uuid": "d36155a2-a135-4577-9031-5609bf55ef33", 00:08:19.262 "assigned_rate_limits": { 00:08:19.262 "rw_ios_per_sec": 0, 00:08:19.262 "rw_mbytes_per_sec": 0, 00:08:19.262 "r_mbytes_per_sec": 0, 00:08:19.262 "w_mbytes_per_sec": 0 00:08:19.262 }, 00:08:19.262 "claimed": true, 00:08:19.262 "claim_type": "exclusive_write", 00:08:19.262 "zoned": false, 00:08:19.262 "supported_io_types": { 00:08:19.262 "read": true, 00:08:19.262 "write": true, 00:08:19.262 "unmap": true, 00:08:19.262 "flush": 
true, 00:08:19.262 "reset": true, 00:08:19.262 "nvme_admin": false, 00:08:19.262 "nvme_io": false, 00:08:19.262 "nvme_io_md": false, 00:08:19.262 "write_zeroes": true, 00:08:19.262 "zcopy": true, 00:08:19.262 "get_zone_info": false, 00:08:19.262 "zone_management": false, 00:08:19.262 "zone_append": false, 00:08:19.262 "compare": false, 00:08:19.262 "compare_and_write": false, 00:08:19.262 "abort": true, 00:08:19.262 "seek_hole": false, 00:08:19.262 "seek_data": false, 00:08:19.262 "copy": true, 00:08:19.262 "nvme_iov_md": false 00:08:19.262 }, 00:08:19.262 "memory_domains": [ 00:08:19.262 { 00:08:19.262 "dma_device_id": "system", 00:08:19.262 "dma_device_type": 1 00:08:19.262 }, 00:08:19.262 { 00:08:19.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.262 "dma_device_type": 2 00:08:19.262 } 00:08:19.262 ], 00:08:19.262 "driver_specific": {} 00:08:19.262 }, 00:08:19.262 { 00:08:19.262 "name": "Passthru0", 00:08:19.262 "aliases": [ 00:08:19.262 "f36cd8d6-15ba-5258-a087-fdc3f9179369" 00:08:19.262 ], 00:08:19.262 "product_name": "passthru", 00:08:19.262 "block_size": 512, 00:08:19.262 "num_blocks": 16384, 00:08:19.262 "uuid": "f36cd8d6-15ba-5258-a087-fdc3f9179369", 00:08:19.263 "assigned_rate_limits": { 00:08:19.263 "rw_ios_per_sec": 0, 00:08:19.263 "rw_mbytes_per_sec": 0, 00:08:19.263 "r_mbytes_per_sec": 0, 00:08:19.263 "w_mbytes_per_sec": 0 00:08:19.263 }, 00:08:19.263 "claimed": false, 00:08:19.263 "zoned": false, 00:08:19.263 "supported_io_types": { 00:08:19.263 "read": true, 00:08:19.263 "write": true, 00:08:19.263 "unmap": true, 00:08:19.263 "flush": true, 00:08:19.263 "reset": true, 00:08:19.263 "nvme_admin": false, 00:08:19.263 "nvme_io": false, 00:08:19.263 "nvme_io_md": false, 00:08:19.263 "write_zeroes": true, 00:08:19.263 "zcopy": true, 00:08:19.263 "get_zone_info": false, 00:08:19.263 "zone_management": false, 00:08:19.263 "zone_append": false, 00:08:19.263 "compare": false, 00:08:19.263 "compare_and_write": false, 00:08:19.263 "abort": true, 00:08:19.263 "seek_hole": false, 00:08:19.263 "seek_data": false, 00:08:19.263 "copy": true, 00:08:19.263 "nvme_iov_md": false 00:08:19.263 }, 00:08:19.263 "memory_domains": [ 00:08:19.263 { 00:08:19.263 "dma_device_id": "system", 00:08:19.263 "dma_device_type": 1 00:08:19.263 }, 00:08:19.263 { 00:08:19.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.263 "dma_device_type": 2 00:08:19.263 } 00:08:19.263 ], 00:08:19.263 "driver_specific": { 00:08:19.263 "passthru": { 00:08:19.263 "name": "Passthru0", 00:08:19.263 "base_bdev_name": "Malloc0" 00:08:19.263 } 00:08:19.263 } 00:08:19.263 } 00:08:19.263 ]' 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:19.263 20:35:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:19.263 00:08:19.263 real 0m0.431s 00:08:19.263 user 0m0.305s 00:08:19.263 sys 0m0.050s 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.263 20:35:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 ************************************ 00:08:19.263 END TEST rpc_integrity 00:08:19.263 ************************************ 00:08:19.263 20:35:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:19.263 20:35:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.263 20:35:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.263 20:35:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 ************************************ 00:08:19.263 START TEST rpc_plugins 00:08:19.263 ************************************ 00:08:19.263 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:19.263 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:19.263 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.263 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.263 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.263 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:19.263 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:19.263 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:19.526 { 00:08:19.526 "name": "Malloc1", 00:08:19.526 "aliases": [ 00:08:19.526 "9ee92829-745f-46e6-becf-7bb08ecbec78" 00:08:19.526 ], 00:08:19.526 "product_name": "Malloc disk", 00:08:19.526 "block_size": 4096, 00:08:19.526 "num_blocks": 256, 00:08:19.526 "uuid": "9ee92829-745f-46e6-becf-7bb08ecbec78", 00:08:19.526 "assigned_rate_limits": { 00:08:19.526 "rw_ios_per_sec": 0, 00:08:19.526 "rw_mbytes_per_sec": 0, 00:08:19.526 "r_mbytes_per_sec": 0, 00:08:19.526 "w_mbytes_per_sec": 0 00:08:19.526 }, 00:08:19.526 "claimed": false, 00:08:19.526 "zoned": false, 00:08:19.526 "supported_io_types": { 00:08:19.526 "read": true, 00:08:19.526 "write": true, 00:08:19.526 "unmap": true, 00:08:19.526 "flush": true, 00:08:19.526 "reset": true, 00:08:19.526 "nvme_admin": false, 00:08:19.526 "nvme_io": false, 00:08:19.526 "nvme_io_md": false, 00:08:19.526 "write_zeroes": true, 00:08:19.526 "zcopy": true, 00:08:19.526 "get_zone_info": false, 00:08:19.526 "zone_management": false, 00:08:19.526 "zone_append": false, 00:08:19.526 "compare": false, 00:08:19.526 "compare_and_write": false, 00:08:19.526 "abort": true, 00:08:19.526 "seek_hole": false, 00:08:19.526 "seek_data": false, 00:08:19.526 "copy": true, 00:08:19.526 "nvme_iov_md": false 
00:08:19.526 }, 00:08:19.526 "memory_domains": [ 00:08:19.526 { 00:08:19.526 "dma_device_id": "system", 00:08:19.526 "dma_device_type": 1 00:08:19.526 }, 00:08:19.526 { 00:08:19.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.526 "dma_device_type": 2 00:08:19.526 } 00:08:19.526 ], 00:08:19.526 "driver_specific": {} 00:08:19.526 } 00:08:19.526 ]' 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:19.526 20:35:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:19.526 00:08:19.526 real 0m0.177s 00:08:19.526 user 0m0.123s 00:08:19.526 sys 0m0.021s 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.526 20:35:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 ************************************ 00:08:19.526 END TEST rpc_plugins 00:08:19.526 ************************************ 00:08:19.526 20:35:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:19.526 20:35:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.526 20:35:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.526 20:35:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 ************************************ 00:08:19.526 START TEST rpc_trace_cmd_test 00:08:19.526 ************************************ 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:19.526 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1582371", 00:08:19.526 "tpoint_group_mask": "0x8", 00:08:19.526 "iscsi_conn": { 00:08:19.526 "mask": "0x2", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "scsi": { 00:08:19.526 "mask": "0x4", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "bdev": { 00:08:19.526 "mask": "0x8", 00:08:19.526 "tpoint_mask": "0xffffffffffffffff" 00:08:19.526 }, 00:08:19.526 "nvmf_rdma": { 00:08:19.526 "mask": "0x10", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "nvmf_tcp": { 00:08:19.526 "mask": "0x20", 00:08:19.526 
"tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "ftl": { 00:08:19.526 "mask": "0x40", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "blobfs": { 00:08:19.526 "mask": "0x80", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "dsa": { 00:08:19.526 "mask": "0x200", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "thread": { 00:08:19.526 "mask": "0x400", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "nvme_pcie": { 00:08:19.526 "mask": "0x800", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "iaa": { 00:08:19.526 "mask": "0x1000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "nvme_tcp": { 00:08:19.526 "mask": "0x2000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "bdev_nvme": { 00:08:19.526 "mask": "0x4000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "sock": { 00:08:19.526 "mask": "0x8000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "blob": { 00:08:19.526 "mask": "0x10000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "bdev_raid": { 00:08:19.526 "mask": "0x20000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 }, 00:08:19.526 "scheduler": { 00:08:19.526 "mask": "0x40000", 00:08:19.526 "tpoint_mask": "0x0" 00:08:19.526 } 00:08:19.526 }' 00:08:19.526 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:19.791 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:20.052 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:20.052 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:20.052 20:35:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:20.052 00:08:20.052 real 0m0.336s 00:08:20.052 user 0m0.293s 00:08:20.052 sys 0m0.031s 00:08:20.052 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.052 20:35:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.052 ************************************ 00:08:20.052 END TEST rpc_trace_cmd_test 00:08:20.052 ************************************ 00:08:20.052 20:35:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:20.052 20:35:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:20.052 20:35:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:20.052 20:35:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.052 20:35:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.052 20:35:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.052 ************************************ 00:08:20.052 START TEST rpc_daemon_integrity 00:08:20.052 ************************************ 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.052 20:35:48 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:20.052 { 00:08:20.052 "name": "Malloc2", 00:08:20.052 "aliases": [ 00:08:20.052 "964ea940-e7f6-4310-979a-9ad6516c6e19" 00:08:20.052 ], 00:08:20.052 "product_name": "Malloc disk", 00:08:20.052 "block_size": 512, 00:08:20.052 "num_blocks": 16384, 00:08:20.052 "uuid": "964ea940-e7f6-4310-979a-9ad6516c6e19", 00:08:20.052 "assigned_rate_limits": { 00:08:20.052 "rw_ios_per_sec": 0, 00:08:20.052 "rw_mbytes_per_sec": 0, 00:08:20.052 "r_mbytes_per_sec": 0, 00:08:20.052 "w_mbytes_per_sec": 0 00:08:20.052 }, 00:08:20.052 "claimed": false, 00:08:20.052 "zoned": false, 00:08:20.052 "supported_io_types": { 00:08:20.052 "read": true, 00:08:20.052 "write": true, 00:08:20.052 "unmap": true, 00:08:20.052 "flush": true, 00:08:20.052 "reset": true, 00:08:20.052 "nvme_admin": false, 00:08:20.052 "nvme_io": false, 00:08:20.052 "nvme_io_md": false, 00:08:20.052 "write_zeroes": true, 00:08:20.052 "zcopy": true, 00:08:20.052 "get_zone_info": false, 00:08:20.052 "zone_management": false, 00:08:20.052 "zone_append": false, 00:08:20.052 "compare": false, 00:08:20.052 "compare_and_write": false, 00:08:20.052 "abort": true, 00:08:20.052 "seek_hole": false, 00:08:20.052 "seek_data": false, 00:08:20.052 "copy": true, 00:08:20.052 "nvme_iov_md": false 00:08:20.052 }, 00:08:20.052 "memory_domains": [ 00:08:20.052 { 00:08:20.052 "dma_device_id": "system", 00:08:20.052 "dma_device_type": 1 00:08:20.052 }, 00:08:20.052 { 00:08:20.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.052 "dma_device_type": 2 00:08:20.052 } 00:08:20.052 ], 00:08:20.052 "driver_specific": {} 00:08:20.052 } 00:08:20.052 ]' 00:08:20.052 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 [2024-10-08 20:35:48.832073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:20.311 
[2024-10-08 20:35:48.832168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.311 [2024-10-08 20:35:48.832232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb07d30 00:08:20.311 [2024-10-08 20:35:48.832252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.311 [2024-10-08 20:35:48.834513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.311 [2024-10-08 20:35:48.834576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:20.311 Passthru0 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:20.311 { 00:08:20.311 "name": "Malloc2", 00:08:20.311 "aliases": [ 00:08:20.311 "964ea940-e7f6-4310-979a-9ad6516c6e19" 00:08:20.311 ], 00:08:20.311 "product_name": "Malloc disk", 00:08:20.311 "block_size": 512, 00:08:20.311 "num_blocks": 16384, 00:08:20.311 "uuid": "964ea940-e7f6-4310-979a-9ad6516c6e19", 00:08:20.311 "assigned_rate_limits": { 00:08:20.311 "rw_ios_per_sec": 0, 00:08:20.311 "rw_mbytes_per_sec": 0, 00:08:20.311 "r_mbytes_per_sec": 0, 00:08:20.311 "w_mbytes_per_sec": 0 00:08:20.311 }, 00:08:20.311 "claimed": true, 00:08:20.311 "claim_type": "exclusive_write", 00:08:20.311 "zoned": false, 00:08:20.311 "supported_io_types": { 00:08:20.311 "read": true, 00:08:20.311 "write": true, 00:08:20.311 "unmap": true, 00:08:20.311 "flush": true, 00:08:20.311 "reset": true, 00:08:20.311 "nvme_admin": false, 00:08:20.311 "nvme_io": false, 00:08:20.311 "nvme_io_md": false, 00:08:20.311 "write_zeroes": true, 00:08:20.311 "zcopy": true, 00:08:20.311 "get_zone_info": false, 00:08:20.311 "zone_management": false, 00:08:20.311 "zone_append": false, 00:08:20.311 "compare": false, 00:08:20.311 "compare_and_write": false, 00:08:20.311 "abort": true, 00:08:20.311 "seek_hole": false, 00:08:20.311 "seek_data": false, 00:08:20.311 "copy": true, 00:08:20.311 "nvme_iov_md": false 00:08:20.311 }, 00:08:20.311 "memory_domains": [ 00:08:20.311 { 00:08:20.311 "dma_device_id": "system", 00:08:20.311 "dma_device_type": 1 00:08:20.311 }, 00:08:20.311 { 00:08:20.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.311 "dma_device_type": 2 00:08:20.311 } 00:08:20.311 ], 00:08:20.311 "driver_specific": {} 00:08:20.311 }, 00:08:20.311 { 00:08:20.311 "name": "Passthru0", 00:08:20.311 "aliases": [ 00:08:20.311 "18b5a5bd-fc8a-5c9c-81a7-a29e98fd66f7" 00:08:20.311 ], 00:08:20.311 "product_name": "passthru", 00:08:20.311 "block_size": 512, 00:08:20.311 "num_blocks": 16384, 00:08:20.311 "uuid": "18b5a5bd-fc8a-5c9c-81a7-a29e98fd66f7", 00:08:20.311 "assigned_rate_limits": { 00:08:20.311 "rw_ios_per_sec": 0, 00:08:20.311 "rw_mbytes_per_sec": 0, 00:08:20.311 "r_mbytes_per_sec": 0, 00:08:20.311 "w_mbytes_per_sec": 0 00:08:20.311 }, 00:08:20.311 "claimed": false, 00:08:20.311 "zoned": false, 00:08:20.311 "supported_io_types": { 00:08:20.311 "read": true, 00:08:20.311 "write": true, 00:08:20.311 "unmap": true, 00:08:20.311 "flush": true, 00:08:20.311 "reset": true, 
00:08:20.311 "nvme_admin": false, 00:08:20.311 "nvme_io": false, 00:08:20.311 "nvme_io_md": false, 00:08:20.311 "write_zeroes": true, 00:08:20.311 "zcopy": true, 00:08:20.311 "get_zone_info": false, 00:08:20.311 "zone_management": false, 00:08:20.311 "zone_append": false, 00:08:20.311 "compare": false, 00:08:20.311 "compare_and_write": false, 00:08:20.311 "abort": true, 00:08:20.311 "seek_hole": false, 00:08:20.311 "seek_data": false, 00:08:20.311 "copy": true, 00:08:20.311 "nvme_iov_md": false 00:08:20.311 }, 00:08:20.311 "memory_domains": [ 00:08:20.311 { 00:08:20.311 "dma_device_id": "system", 00:08:20.311 "dma_device_type": 1 00:08:20.311 }, 00:08:20.311 { 00:08:20.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.311 "dma_device_type": 2 00:08:20.311 } 00:08:20.311 ], 00:08:20.311 "driver_specific": { 00:08:20.311 "passthru": { 00:08:20.311 "name": "Passthru0", 00:08:20.311 "base_bdev_name": "Malloc2" 00:08:20.311 } 00:08:20.311 } 00:08:20.311 } 00:08:20.311 ]' 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:20.311 20:35:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:20.311 20:35:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:20.311 00:08:20.311 real 0m0.414s 00:08:20.311 user 0m0.305s 00:08:20.311 sys 0m0.041s 00:08:20.311 20:35:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.311 20:35:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.311 ************************************ 00:08:20.311 END TEST rpc_daemon_integrity 00:08:20.311 ************************************ 00:08:20.569 20:35:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:20.569 20:35:49 rpc -- rpc/rpc.sh@84 -- # killprocess 1582371 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@950 -- # '[' -z 1582371 ']' 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@954 -- # kill -0 1582371 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@955 -- # uname 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1582371 
00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1582371' 00:08:20.569 killing process with pid 1582371 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@969 -- # kill 1582371 00:08:20.569 20:35:49 rpc -- common/autotest_common.sh@974 -- # wait 1582371 00:08:21.137 00:08:21.137 real 0m3.898s 00:08:21.137 user 0m5.070s 00:08:21.137 sys 0m1.060s 00:08:21.137 20:35:49 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.137 20:35:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.137 ************************************ 00:08:21.137 END TEST rpc 00:08:21.137 ************************************ 00:08:21.137 20:35:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:21.137 20:35:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.137 20:35:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.137 20:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:21.397 ************************************ 00:08:21.397 START TEST skip_rpc 00:08:21.397 ************************************ 00:08:21.397 20:35:49 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:21.397 * Looking for test storage... 00:08:21.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:21.397 20:35:49 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:21.397 20:35:49 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:21.397 20:35:49 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.397 20:35:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:21.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.397 --rc genhtml_branch_coverage=1 00:08:21.397 --rc genhtml_function_coverage=1 00:08:21.397 --rc genhtml_legend=1 00:08:21.397 --rc geninfo_all_blocks=1 00:08:21.397 --rc geninfo_unexecuted_blocks=1 00:08:21.397 00:08:21.397 ' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:21.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.397 --rc genhtml_branch_coverage=1 00:08:21.397 --rc genhtml_function_coverage=1 00:08:21.397 --rc genhtml_legend=1 00:08:21.397 --rc geninfo_all_blocks=1 00:08:21.397 --rc geninfo_unexecuted_blocks=1 00:08:21.397 00:08:21.397 ' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:21.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.397 --rc genhtml_branch_coverage=1 00:08:21.397 --rc genhtml_function_coverage=1 00:08:21.397 --rc genhtml_legend=1 00:08:21.397 --rc geninfo_all_blocks=1 00:08:21.397 --rc geninfo_unexecuted_blocks=1 00:08:21.397 00:08:21.397 ' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:21.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.397 --rc genhtml_branch_coverage=1 00:08:21.397 --rc genhtml_function_coverage=1 00:08:21.397 --rc genhtml_legend=1 00:08:21.397 --rc geninfo_all_blocks=1 00:08:21.397 --rc geninfo_unexecuted_blocks=1 00:08:21.397 00:08:21.397 ' 00:08:21.397 20:35:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:21.397 20:35:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:21.397 20:35:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.397 20:35:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.397 ************************************ 00:08:21.397 START TEST skip_rpc 00:08:21.397 ************************************ 00:08:21.397 20:35:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:21.397 
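The skip_rpc case that begins here starts its target with --no-rpc-server, so the point of the trace that follows is to prove that RPC calls fail: NOT rpc_cmd spdk_get_version must return non-zero. Checked by hand, the same negative test is roughly:

  # With --no-rpc-server there is no RPC listener, so the client must fail;
  # success here would be the error condition.
  if sudo ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
      echo 'unexpected: RPC server is answering'
  else
      echo 'RPC unavailable, as expected with --no-rpc-server'
  fi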
20:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1583040 00:08:21.397 20:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:21.397 20:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.397 20:35:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:21.658 [2024-10-08 20:35:50.245921] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:08:21.658 [2024-10-08 20:35:50.246099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583040 ] 00:08:21.658 [2024-10-08 20:35:50.387713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.917 [2024-10-08 20:35:50.593361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1583040 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1583040 ']' 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1583040 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583040 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583040' 00:08:27.201 killing process with pid 1583040 00:08:27.201 20:35:55 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1583040 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1583040 00:08:27.201 00:08:27.201 real 0m5.743s 00:08:27.201 user 0m5.196s 00:08:27.201 sys 0m0.578s 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.201 20:35:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.201 ************************************ 00:08:27.201 END TEST skip_rpc 00:08:27.201 ************************************ 00:08:27.201 20:35:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:27.201 20:35:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.201 20:35:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.201 20:35:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.201 ************************************ 00:08:27.201 START TEST skip_rpc_with_json 00:08:27.201 ************************************ 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1583717 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1583717 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1583717 ']' 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.201 20:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.462 [2024-10-08 20:35:55.990003] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
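skip_rpc_with_json builds target state over RPC and then snapshots it, which is what the output below shows: nvmf_get_transports fails while no transport exists, nvmf_create_transport -t tcp adds one, and save_config dumps the resulting configuration (the JSON that follows) into the test's config.json. A stand-alone sketch of that flow, with an illustrative output path:

  # Create a TCP transport, then capture the full target configuration as JSON.
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp
  sudo ./scripts/rpc.py save_config > /tmp/config.json   # /tmp path is illustrative only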
00:08:27.462 [2024-10-08 20:35:55.990124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583717 ] 00:08:27.462 [2024-10-08 20:35:56.103252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.723 [2024-10-08 20:35:56.318940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 [2024-10-08 20:35:56.800915] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:28.292 request: 00:08:28.292 { 00:08:28.292 "trtype": "tcp", 00:08:28.292 "method": "nvmf_get_transports", 00:08:28.292 "req_id": 1 00:08:28.292 } 00:08:28.292 Got JSON-RPC error response 00:08:28.292 response: 00:08:28.292 { 00:08:28.292 "code": -19, 00:08:28.292 "message": "No such device" 00:08:28.292 } 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 [2024-10-08 20:35:56.812995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.292 20:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.292 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:28.292 { 00:08:28.292 "subsystems": [ 00:08:28.292 { 00:08:28.292 "subsystem": "fsdev", 00:08:28.292 "config": [ 00:08:28.292 { 00:08:28.292 "method": "fsdev_set_opts", 00:08:28.292 "params": { 00:08:28.292 "fsdev_io_pool_size": 65535, 00:08:28.292 "fsdev_io_cache_size": 256 00:08:28.292 } 00:08:28.292 } 00:08:28.292 ] 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "vfio_user_target", 00:08:28.292 "config": null 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "keyring", 00:08:28.292 "config": [] 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "iobuf", 00:08:28.292 "config": [ 00:08:28.292 { 00:08:28.292 "method": "iobuf_set_options", 00:08:28.292 "params": { 00:08:28.292 "small_pool_count": 8192, 00:08:28.292 "large_pool_count": 1024, 00:08:28.292 "small_bufsize": 8192, 00:08:28.292 "large_bufsize": 135168 00:08:28.292 } 00:08:28.292 } 00:08:28.292 ] 00:08:28.292 }, 00:08:28.292 { 
00:08:28.292 "subsystem": "sock", 00:08:28.292 "config": [ 00:08:28.292 { 00:08:28.292 "method": "sock_set_default_impl", 00:08:28.292 "params": { 00:08:28.292 "impl_name": "posix" 00:08:28.292 } 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "method": "sock_impl_set_options", 00:08:28.292 "params": { 00:08:28.292 "impl_name": "ssl", 00:08:28.292 "recv_buf_size": 4096, 00:08:28.292 "send_buf_size": 4096, 00:08:28.292 "enable_recv_pipe": true, 00:08:28.292 "enable_quickack": false, 00:08:28.292 "enable_placement_id": 0, 00:08:28.292 "enable_zerocopy_send_server": true, 00:08:28.292 "enable_zerocopy_send_client": false, 00:08:28.292 "zerocopy_threshold": 0, 00:08:28.292 "tls_version": 0, 00:08:28.292 "enable_ktls": false 00:08:28.292 } 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "method": "sock_impl_set_options", 00:08:28.292 "params": { 00:08:28.292 "impl_name": "posix", 00:08:28.292 "recv_buf_size": 2097152, 00:08:28.292 "send_buf_size": 2097152, 00:08:28.292 "enable_recv_pipe": true, 00:08:28.292 "enable_quickack": false, 00:08:28.292 "enable_placement_id": 0, 00:08:28.292 "enable_zerocopy_send_server": true, 00:08:28.292 "enable_zerocopy_send_client": false, 00:08:28.292 "zerocopy_threshold": 0, 00:08:28.292 "tls_version": 0, 00:08:28.292 "enable_ktls": false 00:08:28.292 } 00:08:28.292 } 00:08:28.292 ] 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "vmd", 00:08:28.292 "config": [] 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "accel", 00:08:28.292 "config": [ 00:08:28.292 { 00:08:28.292 "method": "accel_set_options", 00:08:28.292 "params": { 00:08:28.292 "small_cache_size": 128, 00:08:28.292 "large_cache_size": 16, 00:08:28.292 "task_count": 2048, 00:08:28.292 "sequence_count": 2048, 00:08:28.292 "buf_count": 2048 00:08:28.292 } 00:08:28.292 } 00:08:28.292 ] 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "subsystem": "bdev", 00:08:28.292 "config": [ 00:08:28.292 { 00:08:28.292 "method": "bdev_set_options", 00:08:28.292 "params": { 00:08:28.292 "bdev_io_pool_size": 65535, 00:08:28.292 "bdev_io_cache_size": 256, 00:08:28.292 "bdev_auto_examine": true, 00:08:28.292 "iobuf_small_cache_size": 128, 00:08:28.292 "iobuf_large_cache_size": 16 00:08:28.292 } 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "method": "bdev_raid_set_options", 00:08:28.292 "params": { 00:08:28.292 "process_window_size_kb": 1024, 00:08:28.292 "process_max_bandwidth_mb_sec": 0 00:08:28.292 } 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "method": "bdev_iscsi_set_options", 00:08:28.292 "params": { 00:08:28.292 "timeout_sec": 30 00:08:28.292 } 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "method": "bdev_nvme_set_options", 00:08:28.292 "params": { 00:08:28.292 "action_on_timeout": "none", 00:08:28.292 "timeout_us": 0, 00:08:28.292 "timeout_admin_us": 0, 00:08:28.292 "keep_alive_timeout_ms": 10000, 00:08:28.292 "arbitration_burst": 0, 00:08:28.292 "low_priority_weight": 0, 00:08:28.292 "medium_priority_weight": 0, 00:08:28.292 "high_priority_weight": 0, 00:08:28.292 "nvme_adminq_poll_period_us": 10000, 00:08:28.292 "nvme_ioq_poll_period_us": 0, 00:08:28.292 "io_queue_requests": 0, 00:08:28.292 "delay_cmd_submit": true, 00:08:28.292 "transport_retry_count": 4, 00:08:28.292 "bdev_retry_count": 3, 00:08:28.292 "transport_ack_timeout": 0, 00:08:28.292 "ctrlr_loss_timeout_sec": 0, 00:08:28.292 "reconnect_delay_sec": 0, 00:08:28.292 "fast_io_fail_timeout_sec": 0, 00:08:28.292 "disable_auto_failback": false, 00:08:28.292 "generate_uuids": false, 00:08:28.292 "transport_tos": 0, 00:08:28.292 "nvme_error_stat": false, 
00:08:28.292 "rdma_srq_size": 0, 00:08:28.292 "io_path_stat": false, 00:08:28.292 "allow_accel_sequence": false, 00:08:28.293 "rdma_max_cq_size": 0, 00:08:28.293 "rdma_cm_event_timeout_ms": 0, 00:08:28.293 "dhchap_digests": [ 00:08:28.293 "sha256", 00:08:28.293 "sha384", 00:08:28.293 "sha512" 00:08:28.293 ], 00:08:28.293 "dhchap_dhgroups": [ 00:08:28.293 "null", 00:08:28.293 "ffdhe2048", 00:08:28.293 "ffdhe3072", 00:08:28.293 "ffdhe4096", 00:08:28.293 "ffdhe6144", 00:08:28.293 "ffdhe8192" 00:08:28.293 ] 00:08:28.293 } 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "method": "bdev_nvme_set_hotplug", 00:08:28.293 "params": { 00:08:28.293 "period_us": 100000, 00:08:28.293 "enable": false 00:08:28.293 } 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "method": "bdev_wait_for_examine" 00:08:28.293 } 00:08:28.293 ] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "scsi", 00:08:28.293 "config": null 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "scheduler", 00:08:28.293 "config": [ 00:08:28.293 { 00:08:28.293 "method": "framework_set_scheduler", 00:08:28.293 "params": { 00:08:28.293 "name": "static" 00:08:28.293 } 00:08:28.293 } 00:08:28.293 ] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "vhost_scsi", 00:08:28.293 "config": [] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "vhost_blk", 00:08:28.293 "config": [] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "ublk", 00:08:28.293 "config": [] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "nbd", 00:08:28.293 "config": [] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "nvmf", 00:08:28.293 "config": [ 00:08:28.293 { 00:08:28.293 "method": "nvmf_set_config", 00:08:28.293 "params": { 00:08:28.293 "discovery_filter": "match_any", 00:08:28.293 "admin_cmd_passthru": { 00:08:28.293 "identify_ctrlr": false 00:08:28.293 }, 00:08:28.293 "dhchap_digests": [ 00:08:28.293 "sha256", 00:08:28.293 "sha384", 00:08:28.293 "sha512" 00:08:28.293 ], 00:08:28.293 "dhchap_dhgroups": [ 00:08:28.293 "null", 00:08:28.293 "ffdhe2048", 00:08:28.293 "ffdhe3072", 00:08:28.293 "ffdhe4096", 00:08:28.293 "ffdhe6144", 00:08:28.293 "ffdhe8192" 00:08:28.293 ] 00:08:28.293 } 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "method": "nvmf_set_max_subsystems", 00:08:28.293 "params": { 00:08:28.293 "max_subsystems": 1024 00:08:28.293 } 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "method": "nvmf_set_crdt", 00:08:28.293 "params": { 00:08:28.293 "crdt1": 0, 00:08:28.293 "crdt2": 0, 00:08:28.293 "crdt3": 0 00:08:28.293 } 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "method": "nvmf_create_transport", 00:08:28.293 "params": { 00:08:28.293 "trtype": "TCP", 00:08:28.293 "max_queue_depth": 128, 00:08:28.293 "max_io_qpairs_per_ctrlr": 127, 00:08:28.293 "in_capsule_data_size": 4096, 00:08:28.293 "max_io_size": 131072, 00:08:28.293 "io_unit_size": 131072, 00:08:28.293 "max_aq_depth": 128, 00:08:28.293 "num_shared_buffers": 511, 00:08:28.293 "buf_cache_size": 4294967295, 00:08:28.293 "dif_insert_or_strip": false, 00:08:28.293 "zcopy": false, 00:08:28.293 "c2h_success": true, 00:08:28.293 "sock_priority": 0, 00:08:28.293 "abort_timeout_sec": 1, 00:08:28.293 "ack_timeout": 0, 00:08:28.293 "data_wr_pool_size": 0 00:08:28.293 } 00:08:28.293 } 00:08:28.293 ] 00:08:28.293 }, 00:08:28.293 { 00:08:28.293 "subsystem": "iscsi", 00:08:28.293 "config": [ 00:08:28.293 { 00:08:28.293 "method": "iscsi_set_options", 00:08:28.293 "params": { 00:08:28.293 "node_base": "iqn.2016-06.io.spdk", 00:08:28.293 "max_sessions": 128, 00:08:28.293 
"max_connections_per_session": 2, 00:08:28.293 "max_queue_depth": 64, 00:08:28.293 "default_time2wait": 2, 00:08:28.293 "default_time2retain": 20, 00:08:28.293 "first_burst_length": 8192, 00:08:28.293 "immediate_data": true, 00:08:28.293 "allow_duplicated_isid": false, 00:08:28.293 "error_recovery_level": 0, 00:08:28.293 "nop_timeout": 60, 00:08:28.293 "nop_in_interval": 30, 00:08:28.293 "disable_chap": false, 00:08:28.293 "require_chap": false, 00:08:28.293 "mutual_chap": false, 00:08:28.293 "chap_group": 0, 00:08:28.293 "max_large_datain_per_connection": 64, 00:08:28.293 "max_r2t_per_connection": 4, 00:08:28.293 "pdu_pool_size": 36864, 00:08:28.293 "immediate_data_pool_size": 16384, 00:08:28.293 "data_out_pool_size": 2048 00:08:28.293 } 00:08:28.293 } 00:08:28.293 ] 00:08:28.293 } 00:08:28.293 ] 00:08:28.293 } 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1583717 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1583717 ']' 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1583717 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.293 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583717 00:08:28.553 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.553 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.553 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583717' 00:08:28.553 killing process with pid 1583717 00:08:28.553 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1583717 00:08:28.553 20:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1583717 00:08:29.122 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1583926 00:08:29.122 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:29.122 20:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:34.409 20:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1583926 00:08:34.409 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1583926 ']' 00:08:34.409 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1583926 00:08:34.409 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:34.409 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583926 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1583926' 00:08:34.410 killing process with pid 1583926 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1583926 00:08:34.410 20:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1583926 00:08:34.669 20:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:34.669 20:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:34.669 00:08:34.669 real 0m7.510s 00:08:34.669 user 0m6.897s 00:08:34.669 sys 0m1.286s 00:08:34.669 20:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.669 20:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:34.669 ************************************ 00:08:34.669 END TEST skip_rpc_with_json 00:08:34.669 ************************************ 00:08:34.929 20:36:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:34.929 20:36:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.929 20:36:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.929 20:36:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.929 ************************************ 00:08:34.929 START TEST skip_rpc_with_delay 00:08:34.929 ************************************ 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:34.929 [2024-10-08 
20:36:03.614161] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:34.929 [2024-10-08 20:36:03.614424] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.929 00:08:34.929 real 0m0.162s 00:08:34.929 user 0m0.103s 00:08:34.929 sys 0m0.056s 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.929 20:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:34.929 ************************************ 00:08:34.929 END TEST skip_rpc_with_delay 00:08:34.929 ************************************ 00:08:34.929 20:36:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:35.189 20:36:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:35.189 20:36:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:35.189 20:36:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.189 20:36:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.189 20:36:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 ************************************ 00:08:35.189 START TEST exit_on_failed_rpc_init 00:08:35.189 ************************************ 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1584751 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1584751 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1584751 ']' 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.189 20:36:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 [2024-10-08 20:36:03.853217] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
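The skip_rpc_with_delay case above only has to confirm that the flag pair is rejected: --wait-for-rpc asks the app to pause initialization until an explicit RPC tells it to continue, which cannot work when --no-rpc-server is also given, so spdk_tgt must exit with an error instead of hanging. A minimal sketch of the same check, assuming a standard SPDK checkout (path illustrative):

    # the combination must fail fast rather than wait forever for an RPC
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: --wait-for-rpc was accepted without an RPC server" >&2
        exit 1
    fi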
00:08:35.189 [2024-10-08 20:36:03.853398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584751 ] 00:08:35.450 [2024-10-08 20:36:04.002460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.710 [2024-10-08 20:36:04.231706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:35.969 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:35.970 20:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.230 [2024-10-08 20:36:04.826516] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:08:36.230 [2024-10-08 20:36:04.826723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584886 ] 00:08:36.230 [2024-10-08 20:36:04.963877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.490 [2024-10-08 20:36:05.189791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.490 [2024-10-08 20:36:05.190013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:36.490 [2024-10-08 20:36:05.190065] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:36.490 [2024-10-08 20:36:05.190097] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1584751 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1584751 ']' 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1584751 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1584751 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1584751' 00:08:36.748 killing process with pid 1584751 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1584751 00:08:36.748 20:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1584751 00:08:37.685 00:08:37.685 real 0m2.393s 00:08:37.685 user 0m2.906s 00:08:37.685 sys 0m0.872s 00:08:37.685 20:36:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.685 20:36:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:37.685 ************************************ 00:08:37.685 END TEST exit_on_failed_rpc_init 00:08:37.685 ************************************ 00:08:37.685 20:36:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:37.685 00:08:37.685 real 0m16.262s 00:08:37.685 user 0m15.320s 00:08:37.685 sys 0m3.057s 00:08:37.685 20:36:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.685 20:36:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.685 ************************************ 00:08:37.685 END TEST skip_rpc 00:08:37.685 ************************************ 00:08:37.685 20:36:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:37.685 20:36:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.685 20:36:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.685 20:36:06 -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.685 ************************************ 00:08:37.685 START TEST rpc_client 00:08:37.685 ************************************ 00:08:37.685 20:36:06 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:37.685 * Looking for test storage... 00:08:37.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:37.685 20:36:06 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:37.685 20:36:06 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:08:37.685 20:36:06 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.945 20:36:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.945 --rc genhtml_branch_coverage=1 00:08:37.945 --rc genhtml_function_coverage=1 00:08:37.945 --rc genhtml_legend=1 00:08:37.945 --rc geninfo_all_blocks=1 00:08:37.945 --rc geninfo_unexecuted_blocks=1 00:08:37.945 00:08:37.945 ' 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.945 --rc genhtml_branch_coverage=1 00:08:37.945 --rc genhtml_function_coverage=1 00:08:37.945 --rc genhtml_legend=1 00:08:37.945 --rc geninfo_all_blocks=1 00:08:37.945 --rc geninfo_unexecuted_blocks=1 00:08:37.945 00:08:37.945 ' 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.945 --rc genhtml_branch_coverage=1 00:08:37.945 --rc genhtml_function_coverage=1 00:08:37.945 --rc genhtml_legend=1 00:08:37.945 --rc geninfo_all_blocks=1 00:08:37.945 --rc geninfo_unexecuted_blocks=1 00:08:37.945 00:08:37.945 ' 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.945 --rc genhtml_branch_coverage=1 00:08:37.945 --rc genhtml_function_coverage=1 00:08:37.945 --rc genhtml_legend=1 00:08:37.945 --rc geninfo_all_blocks=1 00:08:37.945 --rc geninfo_unexecuted_blocks=1 00:08:37.945 00:08:37.945 ' 00:08:37.945 20:36:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:37.945 OK 00:08:37.945 20:36:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:37.945 00:08:37.945 real 0m0.284s 00:08:37.945 user 0m0.191s 00:08:37.945 sys 0m0.103s 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.945 20:36:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 ************************************ 00:08:37.945 END TEST rpc_client 00:08:37.945 ************************************ 00:08:37.945 20:36:06 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:08:37.945 20:36:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.945 20:36:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.945 20:36:06 -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 ************************************ 00:08:37.945 START TEST json_config 00:08:37.945 ************************************ 00:08:37.945 20:36:06 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:37.945 20:36:06 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:37.945 20:36:06 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:08:37.945 20:36:06 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.205 20:36:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.205 20:36:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.205 20:36:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.205 20:36:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.205 20:36:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.205 20:36:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:38.205 20:36:06 json_config -- scripts/common.sh@345 -- # : 1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.205 20:36:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.205 20:36:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@353 -- # local d=1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.205 20:36:06 json_config -- scripts/common.sh@355 -- # echo 1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.205 20:36:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@353 -- # local d=2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.205 20:36:06 json_config -- scripts/common.sh@355 -- # echo 2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.205 20:36:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.205 20:36:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.205 20:36:06 json_config -- scripts/common.sh@368 -- # return 0 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.205 --rc genhtml_branch_coverage=1 00:08:38.205 --rc genhtml_function_coverage=1 00:08:38.205 --rc genhtml_legend=1 00:08:38.205 --rc geninfo_all_blocks=1 00:08:38.205 --rc geninfo_unexecuted_blocks=1 00:08:38.205 00:08:38.205 ' 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.205 --rc genhtml_branch_coverage=1 00:08:38.205 --rc genhtml_function_coverage=1 00:08:38.205 --rc genhtml_legend=1 00:08:38.205 --rc geninfo_all_blocks=1 00:08:38.205 --rc geninfo_unexecuted_blocks=1 00:08:38.205 00:08:38.205 ' 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.205 --rc genhtml_branch_coverage=1 00:08:38.205 --rc genhtml_function_coverage=1 00:08:38.205 --rc genhtml_legend=1 00:08:38.205 --rc geninfo_all_blocks=1 00:08:38.205 --rc geninfo_unexecuted_blocks=1 00:08:38.205 00:08:38.205 ' 00:08:38.205 20:36:06 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.205 --rc genhtml_branch_coverage=1 00:08:38.205 --rc genhtml_function_coverage=1 00:08:38.205 --rc genhtml_legend=1 00:08:38.205 --rc geninfo_all_blocks=1 00:08:38.205 --rc geninfo_unexecuted_blocks=1 00:08:38.205 00:08:38.205 ' 00:08:38.205 20:36:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:38.205 20:36:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.205 20:36:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.205 20:36:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.205 20:36:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.205 20:36:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.205 20:36:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.205 20:36:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.205 20:36:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.205 20:36:06 json_config -- paths/export.sh@5 -- # export PATH 00:08:38.205 20:36:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.205 20:36:06 json_config -- nvmf/common.sh@51 -- # : 0 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:38.206 20:36:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.206 20:36:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:38.206 INFO: JSON configuration test init 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.206 20:36:06 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:38.206 20:36:06 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:38.206 20:36:06 json_config -- json_config/common.sh@10 -- # shift 00:08:38.206 20:36:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:38.206 20:36:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:38.206 20:36:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:38.206 20:36:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.206 20:36:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.206 20:36:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1585270 00:08:38.206 20:36:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:38.206 20:36:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:38.206 Waiting for target to run... 00:08:38.206 20:36:06 json_config -- json_config/common.sh@25 -- # waitforlisten 1585270 /var/tmp/spdk_tgt.sock 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@831 -- # '[' -z 1585270 ']' 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:38.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.206 20:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.206 [2024-10-08 20:36:06.853130] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:08:38.206 [2024-10-08 20:36:06.853254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585270 ] 00:08:38.774 [2024-10-08 20:36:07.348889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.774 [2024-10-08 20:36:07.516876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:39.717 20:36:08 json_config -- json_config/common.sh@26 -- # echo '' 00:08:39.717 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.717 20:36:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:39.717 20:36:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:39.717 20:36:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:43.014 20:36:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.014 20:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:43.014 20:36:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:43.014 20:36:11 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:43.274 20:36:12 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@54 -- # sort 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:43.274 20:36:12 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:43.274 20:36:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.274 20:36:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:43.534 20:36:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.534 20:36:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:43.534 20:36:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:43.534 20:36:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:44.104 MallocForNvmf0 00:08:44.104 20:36:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:44.104 20:36:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:44.363 MallocForNvmf1 00:08:44.363 20:36:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:44.363 20:36:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:44.624 [2024-10-08 20:36:13.360752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.884 20:36:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:44.884 20:36:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.144 20:36:13 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:45.144 20:36:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:45.713 20:36:14 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:45.713 20:36:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:46.652 20:36:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:46.652 20:36:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:46.912 [2024-10-08 20:36:15.665323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:47.172 20:36:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:47.172 20:36:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.172 20:36:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.172 20:36:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:47.172 20:36:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.172 20:36:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.172 20:36:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:47.172 20:36:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:47.172 20:36:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:47.432 MallocBdevForConfigChangeCheck 00:08:47.433 20:36:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:47.433 20:36:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.433 20:36:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.433 20:36:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:47.433 20:36:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:48.003 20:36:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:48.003 INFO: shutting down applications... 
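Up to this point the json_config run has assembled its NVMe-oF target entirely over RPC; the same state can be rebuilt by hand against the target's socket. A rough sketch, assuming a standard SPDK checkout (the socket path, bdev names and subsystem NQN match the trace above; everything else follows rpc.py defaults):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # snapshot the live configuration so it can be replayed at the next start-up
    $RPC save_config > spdk_tgt_config.json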
00:08:48.003 20:36:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:48.003 20:36:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:48.003 20:36:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:48.003 20:36:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:49.927 Calling clear_iscsi_subsystem 00:08:49.927 Calling clear_nvmf_subsystem 00:08:49.927 Calling clear_nbd_subsystem 00:08:49.927 Calling clear_ublk_subsystem 00:08:49.927 Calling clear_vhost_blk_subsystem 00:08:49.927 Calling clear_vhost_scsi_subsystem 00:08:49.927 Calling clear_bdev_subsystem 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:49.927 20:36:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:50.215 20:36:18 json_config -- json_config/json_config.sh@352 -- # break 00:08:50.215 20:36:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:50.215 20:36:18 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:50.215 20:36:18 json_config -- json_config/common.sh@31 -- # local app=target 00:08:50.215 20:36:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:50.215 20:36:18 json_config -- json_config/common.sh@35 -- # [[ -n 1585270 ]] 00:08:50.215 20:36:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1585270 00:08:50.215 20:36:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:50.215 20:36:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:50.215 20:36:18 json_config -- json_config/common.sh@41 -- # kill -0 1585270 00:08:50.215 20:36:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:50.790 20:36:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:50.790 20:36:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:50.790 20:36:19 json_config -- json_config/common.sh@41 -- # kill -0 1585270 00:08:50.790 20:36:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:50.790 20:36:19 json_config -- json_config/common.sh@43 -- # break 00:08:50.790 20:36:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:50.790 20:36:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:50.790 SPDK target shutdown done 00:08:50.790 20:36:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:50.790 INFO: relaunching applications... 
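The clear-and-shutdown sequence above follows a simple pattern: wipe the live configuration with clear_config.py, poll save_config through config_filter.py until it reports an empty config, then SIGINT the target and wait for its PID to disappear. A rough sketch of the same logic, with $PID standing in for the target process id:

SOCK=/var/tmp/spdk_tgt.sock
test/json_config/clear_config.py -s $SOCK clear_config

# Poll until the remaining config (minus global parameters) is reported empty.
count=100
while [ $count -gt 0 ]; do
    scripts/rpc.py -s $SOCK save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty && break
    count=$((count - 1))
done

# Signal-and-poll shutdown: give the target up to 30 x 0.5 s to exit after SIGINT.
kill -SIGINT "$PID"
for _ in $(seq 1 30); do
    kill -0 "$PID" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'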
00:08:50.790 20:36:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:50.790 20:36:19 json_config -- json_config/common.sh@9 -- # local app=target 00:08:50.790 20:36:19 json_config -- json_config/common.sh@10 -- # shift 00:08:50.790 20:36:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:50.790 20:36:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:50.790 20:36:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:50.790 20:36:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.790 20:36:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.790 20:36:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1587319 00:08:50.791 20:36:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:50.791 20:36:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:50.791 Waiting for target to run... 00:08:50.791 20:36:19 json_config -- json_config/common.sh@25 -- # waitforlisten 1587319 /var/tmp/spdk_tgt.sock 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 1587319 ']' 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:50.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.791 20:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.791 [2024-10-08 20:36:19.508636] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:08:50.791 [2024-10-08 20:36:19.508852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587319 ] 00:08:51.730 [2024-10-08 20:36:20.311788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.991 [2024-10-08 20:36:20.494728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.285 [2024-10-08 20:36:23.642354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.286 [2024-10-08 20:36:23.675110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:55.286 20:36:23 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.286 20:36:23 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:55.286 20:36:23 json_config -- json_config/common.sh@26 -- # echo '' 00:08:55.286 00:08:55.286 20:36:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:55.286 20:36:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:55.286 INFO: Checking if target configuration is the same... 
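The "is the target configuration the same" check that follows is a plain diff: the running target's save_config output and the JSON file it was launched from are each normalised with config_filter.py -method sort and then compared with diff -u, so an empty diff means no drift. A condensed sketch of that comparison, assuming the helper script paths used throughout this run:

SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
FILTER=test/json_config/config_filter.py

scripts/rpc.py -s $SOCK save_config | $FILTER -method sort > /tmp/live_config.json
$FILTER -method sort < "$CONFIG" > /tmp/file_config.json
diff -u /tmp/live_config.json /tmp/file_config.json && echo 'INFO: JSON config files are the same'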
00:08:55.286 20:36:23 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:55.286 20:36:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:55.286 20:36:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:55.286 + '[' 2 -ne 2 ']' 00:08:55.286 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:55.286 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:55.286 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:55.286 +++ basename /dev/fd/62 00:08:55.286 ++ mktemp /tmp/62.XXX 00:08:55.286 + tmp_file_1=/tmp/62.wht 00:08:55.286 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:55.286 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:55.286 + tmp_file_2=/tmp/spdk_tgt_config.json.zsk 00:08:55.286 + ret=0 00:08:55.286 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:55.545 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:55.805 + diff -u /tmp/62.wht /tmp/spdk_tgt_config.json.zsk 00:08:55.805 + echo 'INFO: JSON config files are the same' 00:08:55.805 INFO: JSON config files are the same 00:08:55.805 + rm /tmp/62.wht /tmp/spdk_tgt_config.json.zsk 00:08:55.805 + exit 0 00:08:55.805 20:36:24 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:55.805 20:36:24 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:55.805 INFO: changing configuration and checking if this can be detected... 00:08:55.805 20:36:24 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:55.805 20:36:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:56.374 20:36:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:56.374 20:36:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:56.374 20:36:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:56.374 + '[' 2 -ne 2 ']' 00:08:56.374 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:56.374 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:56.374 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:56.374 +++ basename /dev/fd/62 00:08:56.374 ++ mktemp /tmp/62.XXX 00:08:56.374 + tmp_file_1=/tmp/62.Uql 00:08:56.374 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:56.374 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:56.374 + tmp_file_2=/tmp/spdk_tgt_config.json.6BF 00:08:56.374 + ret=0 00:08:56.374 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:56.943 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:57.202 + diff -u /tmp/62.Uql /tmp/spdk_tgt_config.json.6BF 00:08:57.202 + ret=1 00:08:57.202 + echo '=== Start of file: /tmp/62.Uql ===' 00:08:57.202 + cat /tmp/62.Uql 00:08:57.202 + echo '=== End of file: /tmp/62.Uql ===' 00:08:57.202 + echo '' 00:08:57.202 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6BF ===' 00:08:57.202 + cat /tmp/spdk_tgt_config.json.6BF 00:08:57.202 + echo '=== End of file: /tmp/spdk_tgt_config.json.6BF ===' 00:08:57.202 + echo '' 00:08:57.202 + rm /tmp/62.Uql /tmp/spdk_tgt_config.json.6BF 00:08:57.202 + exit 1 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:57.202 INFO: configuration change detected. 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 1587319 ]] 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:57.202 20:36:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.202 20:36:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.203 20:36:25 json_config -- json_config/json_config.sh@330 -- # killprocess 1587319 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@950 -- # '[' -z 1587319 ']' 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@954 -- # kill -0 1587319 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@955 -- # uname 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.203 20:36:25 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587319 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587319' 00:08:57.203 killing process with pid 1587319 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@969 -- # kill 1587319 00:08:57.203 20:36:25 json_config -- common/autotest_common.sh@974 -- # wait 1587319 00:08:59.112 20:36:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.112 20:36:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:59.112 20:36:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.112 20:36:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.112 20:36:27 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:59.112 20:36:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:59.112 INFO: Success 00:08:59.112 00:08:59.112 real 0m21.153s 00:08:59.112 user 0m26.594s 00:08:59.112 sys 0m3.814s 00:08:59.112 20:36:27 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.112 20:36:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.112 ************************************ 00:08:59.112 END TEST json_config 00:08:59.112 ************************************ 00:08:59.112 20:36:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:59.112 20:36:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.112 20:36:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.112 20:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:59.112 ************************************ 00:08:59.112 START TEST json_config_extra_key 00:08:59.112 ************************************ 00:08:59.112 20:36:27 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:59.112 20:36:27 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:59.112 20:36:27 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:08:59.112 20:36:27 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:59.375 20:36:27 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.375 20:36:27 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.375 20:36:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:59.375 20:36:27 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.375 20:36:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:59.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.375 --rc genhtml_branch_coverage=1 00:08:59.375 --rc genhtml_function_coverage=1 00:08:59.375 --rc genhtml_legend=1 00:08:59.375 --rc geninfo_all_blocks=1 00:08:59.375 --rc geninfo_unexecuted_blocks=1 00:08:59.375 00:08:59.375 ' 00:08:59.375 20:36:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.376 --rc genhtml_branch_coverage=1 00:08:59.376 --rc genhtml_function_coverage=1 00:08:59.376 --rc genhtml_legend=1 00:08:59.376 --rc geninfo_all_blocks=1 00:08:59.376 --rc geninfo_unexecuted_blocks=1 00:08:59.376 00:08:59.376 ' 00:08:59.376 20:36:27 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.376 --rc genhtml_branch_coverage=1 00:08:59.376 --rc genhtml_function_coverage=1 00:08:59.376 --rc genhtml_legend=1 00:08:59.376 --rc geninfo_all_blocks=1 00:08:59.376 --rc geninfo_unexecuted_blocks=1 00:08:59.376 00:08:59.376 ' 00:08:59.376 20:36:27 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.376 --rc genhtml_branch_coverage=1 00:08:59.376 --rc genhtml_function_coverage=1 00:08:59.376 --rc genhtml_legend=1 00:08:59.376 --rc geninfo_all_blocks=1 00:08:59.376 --rc geninfo_unexecuted_blocks=1 00:08:59.376 00:08:59.376 ' 00:08:59.376 20:36:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.376 20:36:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.376 20:36:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.376 20:36:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.376 20:36:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.376 20:36:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.376 20:36:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.376 20:36:27 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.376 20:36:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:59.376 20:36:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.376 20:36:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:59.376 20:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:59.376 20:36:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:59.376 20:36:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:59.376 INFO: launching applications... 
00:08:59.376 20:36:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1588423 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:59.376 Waiting for target to run... 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:59.376 20:36:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1588423 /var/tmp/spdk_tgt.sock 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1588423 ']' 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:59.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.376 20:36:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:59.376 [2024-10-08 20:36:28.136260] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:08:59.376 [2024-10-08 20:36:28.136451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588423 ] 00:09:00.315 [2024-10-08 20:36:28.924140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.574 [2024-10-08 20:36:29.115880] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.141 20:36:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.141 20:36:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:01.141 00:09:01.141 20:36:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:01.141 INFO: shutting down applications... 
00:09:01.141 20:36:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1588423 ]] 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1588423 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1588423 00:09:01.141 20:36:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:01.707 20:36:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:01.707 20:36:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.707 20:36:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1588423 00:09:01.707 20:36:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1588423 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:02.276 20:36:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:02.276 SPDK target shutdown done 00:09:02.276 20:36:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:02.276 Success 00:09:02.276 00:09:02.276 real 0m3.106s 00:09:02.276 user 0m3.064s 00:09:02.276 sys 0m1.001s 00:09:02.276 20:36:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.276 20:36:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 ************************************ 00:09:02.276 END TEST json_config_extra_key 00:09:02.276 ************************************ 00:09:02.276 20:36:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.276 20:36:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.276 20:36:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.276 20:36:30 -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 ************************************ 00:09:02.276 START TEST alias_rpc 00:09:02.276 ************************************ 00:09:02.276 20:36:30 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.276 * Looking for test storage... 
00:09:02.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.534 20:36:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 20:36:31 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 20:36:31 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.535 --rc genhtml_legend=1 00:09:02.535 --rc geninfo_all_blocks=1 00:09:02.535 --rc geninfo_unexecuted_blocks=1 00:09:02.535 00:09:02.535 ' 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.535 --rc genhtml_branch_coverage=1 00:09:02.535 --rc genhtml_function_coverage=1 00:09:02.535 --rc genhtml_legend=1 00:09:02.535 --rc geninfo_all_blocks=1 00:09:02.535 --rc geninfo_unexecuted_blocks=1 00:09:02.535 00:09:02.535 ' 00:09:02.535 20:36:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:02.535 20:36:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1588872 00:09:02.535 20:36:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:02.535 20:36:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1588872 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1588872 ']' 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.535 20:36:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.793 [2024-10-08 20:36:31.324148] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:02.793 [2024-10-08 20:36:31.324259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588872 ] 00:09:02.793 [2024-10-08 20:36:31.407521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.793 [2024-10-08 20:36:31.553389] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.363 20:36:31 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.363 20:36:31 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:03.363 20:36:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:03.933 20:36:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1588872 00:09:03.933 20:36:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1588872 ']' 00:09:03.933 20:36:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1588872 00:09:03.933 20:36:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:03.933 20:36:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.933 20:36:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588872 00:09:04.194 20:36:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.194 20:36:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.194 20:36:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1588872' 00:09:04.194 killing process with pid 1588872 00:09:04.194 20:36:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 1588872 00:09:04.194 20:36:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 1588872 00:09:04.762 00:09:04.762 real 0m2.426s 00:09:04.762 user 0m2.803s 00:09:04.762 sys 0m0.722s 00:09:04.762 20:36:33 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.762 20:36:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.762 ************************************ 00:09:04.762 END TEST alias_rpc 00:09:04.762 ************************************ 00:09:04.762 20:36:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:04.762 20:36:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:04.762 20:36:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.762 20:36:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.762 20:36:33 -- common/autotest_common.sh@10 -- # set +x 00:09:04.762 ************************************ 00:09:04.762 START TEST spdkcli_tcp 00:09:04.762 ************************************ 00:09:04.762 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:04.762 * Looking for test storage... 
00:09:04.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:04.762 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:04.762 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:04.762 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.023 20:36:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.023 --rc genhtml_branch_coverage=1 00:09:05.023 --rc genhtml_function_coverage=1 00:09:05.023 --rc genhtml_legend=1 00:09:05.023 --rc geninfo_all_blocks=1 00:09:05.023 --rc geninfo_unexecuted_blocks=1 00:09:05.023 00:09:05.023 ' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.023 --rc genhtml_branch_coverage=1 00:09:05.023 --rc genhtml_function_coverage=1 00:09:05.023 --rc genhtml_legend=1 00:09:05.023 --rc geninfo_all_blocks=1 00:09:05.023 --rc 
geninfo_unexecuted_blocks=1 00:09:05.023 00:09:05.023 ' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:05.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.023 --rc genhtml_branch_coverage=1 00:09:05.023 --rc genhtml_function_coverage=1 00:09:05.023 --rc genhtml_legend=1 00:09:05.023 --rc geninfo_all_blocks=1 00:09:05.023 --rc geninfo_unexecuted_blocks=1 00:09:05.023 00:09:05.023 ' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.023 --rc genhtml_branch_coverage=1 00:09:05.023 --rc genhtml_function_coverage=1 00:09:05.023 --rc genhtml_legend=1 00:09:05.023 --rc geninfo_all_blocks=1 00:09:05.023 --rc geninfo_unexecuted_blocks=1 00:09:05.023 00:09:05.023 ' 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1589204 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:05.023 20:36:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1589204 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1589204 ']' 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.023 20:36:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.023 [2024-10-08 20:36:33.745249] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:05.023 [2024-10-08 20:36:33.745419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589204 ] 00:09:05.283 [2024-10-08 20:36:33.883216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.548 [2024-10-08 20:36:34.113344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.548 [2024-10-08 20:36:34.113363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.117 20:36:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.118 20:36:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:06.118 20:36:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1589338 00:09:06.118 20:36:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:06.118 20:36:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:06.377 [ 00:09:06.377 "bdev_malloc_delete", 00:09:06.377 "bdev_malloc_create", 00:09:06.377 "bdev_null_resize", 00:09:06.377 "bdev_null_delete", 00:09:06.377 "bdev_null_create", 00:09:06.377 "bdev_nvme_cuse_unregister", 00:09:06.377 "bdev_nvme_cuse_register", 00:09:06.377 "bdev_opal_new_user", 00:09:06.377 "bdev_opal_set_lock_state", 00:09:06.377 "bdev_opal_delete", 00:09:06.377 "bdev_opal_get_info", 00:09:06.377 "bdev_opal_create", 00:09:06.377 "bdev_nvme_opal_revert", 00:09:06.377 "bdev_nvme_opal_init", 00:09:06.377 "bdev_nvme_send_cmd", 00:09:06.377 "bdev_nvme_set_keys", 00:09:06.377 "bdev_nvme_get_path_iostat", 00:09:06.377 "bdev_nvme_get_mdns_discovery_info", 00:09:06.377 "bdev_nvme_stop_mdns_discovery", 00:09:06.377 "bdev_nvme_start_mdns_discovery", 00:09:06.377 "bdev_nvme_set_multipath_policy", 00:09:06.377 "bdev_nvme_set_preferred_path", 00:09:06.377 "bdev_nvme_get_io_paths", 00:09:06.377 "bdev_nvme_remove_error_injection", 00:09:06.377 "bdev_nvme_add_error_injection", 00:09:06.377 "bdev_nvme_get_discovery_info", 00:09:06.377 "bdev_nvme_stop_discovery", 00:09:06.377 "bdev_nvme_start_discovery", 00:09:06.377 "bdev_nvme_get_controller_health_info", 00:09:06.377 "bdev_nvme_disable_controller", 00:09:06.377 "bdev_nvme_enable_controller", 00:09:06.377 "bdev_nvme_reset_controller", 00:09:06.377 "bdev_nvme_get_transport_statistics", 00:09:06.377 "bdev_nvme_apply_firmware", 00:09:06.377 "bdev_nvme_detach_controller", 00:09:06.377 "bdev_nvme_get_controllers", 00:09:06.377 "bdev_nvme_attach_controller", 00:09:06.377 "bdev_nvme_set_hotplug", 00:09:06.377 "bdev_nvme_set_options", 00:09:06.377 "bdev_passthru_delete", 00:09:06.377 "bdev_passthru_create", 00:09:06.377 "bdev_lvol_set_parent_bdev", 00:09:06.377 "bdev_lvol_set_parent", 00:09:06.377 "bdev_lvol_check_shallow_copy", 00:09:06.377 "bdev_lvol_start_shallow_copy", 00:09:06.377 "bdev_lvol_grow_lvstore", 00:09:06.377 "bdev_lvol_get_lvols", 00:09:06.377 "bdev_lvol_get_lvstores", 00:09:06.377 "bdev_lvol_delete", 00:09:06.377 "bdev_lvol_set_read_only", 00:09:06.377 "bdev_lvol_resize", 00:09:06.377 "bdev_lvol_decouple_parent", 00:09:06.377 "bdev_lvol_inflate", 00:09:06.377 "bdev_lvol_rename", 00:09:06.377 "bdev_lvol_clone_bdev", 00:09:06.377 "bdev_lvol_clone", 00:09:06.377 "bdev_lvol_snapshot", 00:09:06.377 "bdev_lvol_create", 00:09:06.377 "bdev_lvol_delete_lvstore", 00:09:06.377 "bdev_lvol_rename_lvstore", 
00:09:06.377 "bdev_lvol_create_lvstore", 00:09:06.377 "bdev_raid_set_options", 00:09:06.377 "bdev_raid_remove_base_bdev", 00:09:06.377 "bdev_raid_add_base_bdev", 00:09:06.377 "bdev_raid_delete", 00:09:06.377 "bdev_raid_create", 00:09:06.377 "bdev_raid_get_bdevs", 00:09:06.377 "bdev_error_inject_error", 00:09:06.377 "bdev_error_delete", 00:09:06.377 "bdev_error_create", 00:09:06.377 "bdev_split_delete", 00:09:06.377 "bdev_split_create", 00:09:06.377 "bdev_delay_delete", 00:09:06.377 "bdev_delay_create", 00:09:06.377 "bdev_delay_update_latency", 00:09:06.377 "bdev_zone_block_delete", 00:09:06.377 "bdev_zone_block_create", 00:09:06.377 "blobfs_create", 00:09:06.377 "blobfs_detect", 00:09:06.377 "blobfs_set_cache_size", 00:09:06.377 "bdev_aio_delete", 00:09:06.377 "bdev_aio_rescan", 00:09:06.377 "bdev_aio_create", 00:09:06.377 "bdev_ftl_set_property", 00:09:06.377 "bdev_ftl_get_properties", 00:09:06.377 "bdev_ftl_get_stats", 00:09:06.377 "bdev_ftl_unmap", 00:09:06.377 "bdev_ftl_unload", 00:09:06.377 "bdev_ftl_delete", 00:09:06.377 "bdev_ftl_load", 00:09:06.377 "bdev_ftl_create", 00:09:06.377 "bdev_virtio_attach_controller", 00:09:06.377 "bdev_virtio_scsi_get_devices", 00:09:06.377 "bdev_virtio_detach_controller", 00:09:06.377 "bdev_virtio_blk_set_hotplug", 00:09:06.377 "bdev_iscsi_delete", 00:09:06.377 "bdev_iscsi_create", 00:09:06.377 "bdev_iscsi_set_options", 00:09:06.377 "accel_error_inject_error", 00:09:06.377 "ioat_scan_accel_module", 00:09:06.377 "dsa_scan_accel_module", 00:09:06.377 "iaa_scan_accel_module", 00:09:06.377 "vfu_virtio_create_fs_endpoint", 00:09:06.377 "vfu_virtio_create_scsi_endpoint", 00:09:06.377 "vfu_virtio_scsi_remove_target", 00:09:06.377 "vfu_virtio_scsi_add_target", 00:09:06.377 "vfu_virtio_create_blk_endpoint", 00:09:06.377 "vfu_virtio_delete_endpoint", 00:09:06.377 "keyring_file_remove_key", 00:09:06.377 "keyring_file_add_key", 00:09:06.377 "keyring_linux_set_options", 00:09:06.377 "fsdev_aio_delete", 00:09:06.377 "fsdev_aio_create", 00:09:06.377 "iscsi_get_histogram", 00:09:06.377 "iscsi_enable_histogram", 00:09:06.377 "iscsi_set_options", 00:09:06.377 "iscsi_get_auth_groups", 00:09:06.377 "iscsi_auth_group_remove_secret", 00:09:06.377 "iscsi_auth_group_add_secret", 00:09:06.377 "iscsi_delete_auth_group", 00:09:06.377 "iscsi_create_auth_group", 00:09:06.377 "iscsi_set_discovery_auth", 00:09:06.377 "iscsi_get_options", 00:09:06.377 "iscsi_target_node_request_logout", 00:09:06.377 "iscsi_target_node_set_redirect", 00:09:06.377 "iscsi_target_node_set_auth", 00:09:06.377 "iscsi_target_node_add_lun", 00:09:06.377 "iscsi_get_stats", 00:09:06.377 "iscsi_get_connections", 00:09:06.377 "iscsi_portal_group_set_auth", 00:09:06.377 "iscsi_start_portal_group", 00:09:06.377 "iscsi_delete_portal_group", 00:09:06.377 "iscsi_create_portal_group", 00:09:06.377 "iscsi_get_portal_groups", 00:09:06.377 "iscsi_delete_target_node", 00:09:06.377 "iscsi_target_node_remove_pg_ig_maps", 00:09:06.377 "iscsi_target_node_add_pg_ig_maps", 00:09:06.377 "iscsi_create_target_node", 00:09:06.377 "iscsi_get_target_nodes", 00:09:06.377 "iscsi_delete_initiator_group", 00:09:06.377 "iscsi_initiator_group_remove_initiators", 00:09:06.377 "iscsi_initiator_group_add_initiators", 00:09:06.377 "iscsi_create_initiator_group", 00:09:06.377 "iscsi_get_initiator_groups", 00:09:06.377 "nvmf_set_crdt", 00:09:06.377 "nvmf_set_config", 00:09:06.377 "nvmf_set_max_subsystems", 00:09:06.377 "nvmf_stop_mdns_prr", 00:09:06.377 "nvmf_publish_mdns_prr", 00:09:06.377 "nvmf_subsystem_get_listeners", 00:09:06.377 
"nvmf_subsystem_get_qpairs", 00:09:06.377 "nvmf_subsystem_get_controllers", 00:09:06.377 "nvmf_get_stats", 00:09:06.377 "nvmf_get_transports", 00:09:06.377 "nvmf_create_transport", 00:09:06.377 "nvmf_get_targets", 00:09:06.377 "nvmf_delete_target", 00:09:06.377 "nvmf_create_target", 00:09:06.377 "nvmf_subsystem_allow_any_host", 00:09:06.377 "nvmf_subsystem_set_keys", 00:09:06.377 "nvmf_subsystem_remove_host", 00:09:06.377 "nvmf_subsystem_add_host", 00:09:06.378 "nvmf_ns_remove_host", 00:09:06.378 "nvmf_ns_add_host", 00:09:06.378 "nvmf_subsystem_remove_ns", 00:09:06.378 "nvmf_subsystem_set_ns_ana_group", 00:09:06.378 "nvmf_subsystem_add_ns", 00:09:06.378 "nvmf_subsystem_listener_set_ana_state", 00:09:06.378 "nvmf_discovery_get_referrals", 00:09:06.378 "nvmf_discovery_remove_referral", 00:09:06.378 "nvmf_discovery_add_referral", 00:09:06.378 "nvmf_subsystem_remove_listener", 00:09:06.378 "nvmf_subsystem_add_listener", 00:09:06.378 "nvmf_delete_subsystem", 00:09:06.378 "nvmf_create_subsystem", 00:09:06.378 "nvmf_get_subsystems", 00:09:06.378 "env_dpdk_get_mem_stats", 00:09:06.378 "nbd_get_disks", 00:09:06.378 "nbd_stop_disk", 00:09:06.378 "nbd_start_disk", 00:09:06.378 "ublk_recover_disk", 00:09:06.378 "ublk_get_disks", 00:09:06.378 "ublk_stop_disk", 00:09:06.378 "ublk_start_disk", 00:09:06.378 "ublk_destroy_target", 00:09:06.378 "ublk_create_target", 00:09:06.378 "virtio_blk_create_transport", 00:09:06.378 "virtio_blk_get_transports", 00:09:06.378 "vhost_controller_set_coalescing", 00:09:06.378 "vhost_get_controllers", 00:09:06.378 "vhost_delete_controller", 00:09:06.378 "vhost_create_blk_controller", 00:09:06.378 "vhost_scsi_controller_remove_target", 00:09:06.378 "vhost_scsi_controller_add_target", 00:09:06.378 "vhost_start_scsi_controller", 00:09:06.378 "vhost_create_scsi_controller", 00:09:06.378 "thread_set_cpumask", 00:09:06.378 "scheduler_set_options", 00:09:06.378 "framework_get_governor", 00:09:06.378 "framework_get_scheduler", 00:09:06.378 "framework_set_scheduler", 00:09:06.378 "framework_get_reactors", 00:09:06.378 "thread_get_io_channels", 00:09:06.378 "thread_get_pollers", 00:09:06.378 "thread_get_stats", 00:09:06.378 "framework_monitor_context_switch", 00:09:06.378 "spdk_kill_instance", 00:09:06.378 "log_enable_timestamps", 00:09:06.378 "log_get_flags", 00:09:06.378 "log_clear_flag", 00:09:06.378 "log_set_flag", 00:09:06.378 "log_get_level", 00:09:06.378 "log_set_level", 00:09:06.378 "log_get_print_level", 00:09:06.378 "log_set_print_level", 00:09:06.378 "framework_enable_cpumask_locks", 00:09:06.378 "framework_disable_cpumask_locks", 00:09:06.378 "framework_wait_init", 00:09:06.378 "framework_start_init", 00:09:06.378 "scsi_get_devices", 00:09:06.378 "bdev_get_histogram", 00:09:06.378 "bdev_enable_histogram", 00:09:06.378 "bdev_set_qos_limit", 00:09:06.378 "bdev_set_qd_sampling_period", 00:09:06.378 "bdev_get_bdevs", 00:09:06.378 "bdev_reset_iostat", 00:09:06.378 "bdev_get_iostat", 00:09:06.378 "bdev_examine", 00:09:06.378 "bdev_wait_for_examine", 00:09:06.378 "bdev_set_options", 00:09:06.378 "accel_get_stats", 00:09:06.378 "accel_set_options", 00:09:06.378 "accel_set_driver", 00:09:06.378 "accel_crypto_key_destroy", 00:09:06.378 "accel_crypto_keys_get", 00:09:06.378 "accel_crypto_key_create", 00:09:06.378 "accel_assign_opc", 00:09:06.378 "accel_get_module_info", 00:09:06.378 "accel_get_opc_assignments", 00:09:06.378 "vmd_rescan", 00:09:06.378 "vmd_remove_device", 00:09:06.378 "vmd_enable", 00:09:06.378 "sock_get_default_impl", 00:09:06.378 "sock_set_default_impl", 
00:09:06.378 "sock_impl_set_options", 00:09:06.378 "sock_impl_get_options", 00:09:06.378 "iobuf_get_stats", 00:09:06.378 "iobuf_set_options", 00:09:06.378 "keyring_get_keys", 00:09:06.378 "vfu_tgt_set_base_path", 00:09:06.378 "framework_get_pci_devices", 00:09:06.378 "framework_get_config", 00:09:06.378 "framework_get_subsystems", 00:09:06.378 "fsdev_set_opts", 00:09:06.378 "fsdev_get_opts", 00:09:06.378 "trace_get_info", 00:09:06.378 "trace_get_tpoint_group_mask", 00:09:06.378 "trace_disable_tpoint_group", 00:09:06.378 "trace_enable_tpoint_group", 00:09:06.378 "trace_clear_tpoint_mask", 00:09:06.378 "trace_set_tpoint_mask", 00:09:06.378 "notify_get_notifications", 00:09:06.378 "notify_get_types", 00:09:06.378 "spdk_get_version", 00:09:06.378 "rpc_get_methods" 00:09:06.378 ] 00:09:06.378 20:36:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:06.378 20:36:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.378 20:36:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.378 20:36:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:06.378 20:36:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1589204 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1589204 ']' 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1589204 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589204 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589204' 00:09:06.378 killing process with pid 1589204 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1589204 00:09:06.378 20:36:35 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1589204 00:09:06.944 00:09:06.944 real 0m2.260s 00:09:06.944 user 0m3.944s 00:09:06.944 sys 0m0.770s 00:09:06.944 20:36:35 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.944 20:36:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.944 ************************************ 00:09:06.944 END TEST spdkcli_tcp 00:09:06.944 ************************************ 00:09:07.205 20:36:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:07.205 20:36:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.205 20:36:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.205 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:07.205 ************************************ 00:09:07.205 START TEST dpdk_mem_utility 00:09:07.205 ************************************ 00:09:07.205 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:07.205 * Looking for test storage... 
00:09:07.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:07.205 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:07.205 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:09:07.205 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:07.205 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.205 20:36:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.206 20:36:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:07.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.206 --rc genhtml_branch_coverage=1 00:09:07.206 --rc genhtml_function_coverage=1 00:09:07.206 --rc genhtml_legend=1 00:09:07.206 --rc geninfo_all_blocks=1 00:09:07.206 --rc geninfo_unexecuted_blocks=1 00:09:07.206 00:09:07.206 ' 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:07.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.206 --rc 
genhtml_branch_coverage=1 00:09:07.206 --rc genhtml_function_coverage=1 00:09:07.206 --rc genhtml_legend=1 00:09:07.206 --rc geninfo_all_blocks=1 00:09:07.206 --rc geninfo_unexecuted_blocks=1 00:09:07.206 00:09:07.206 ' 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:07.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.206 --rc genhtml_branch_coverage=1 00:09:07.206 --rc genhtml_function_coverage=1 00:09:07.206 --rc genhtml_legend=1 00:09:07.206 --rc geninfo_all_blocks=1 00:09:07.206 --rc geninfo_unexecuted_blocks=1 00:09:07.206 00:09:07.206 ' 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:07.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.206 --rc genhtml_branch_coverage=1 00:09:07.206 --rc genhtml_function_coverage=1 00:09:07.206 --rc genhtml_legend=1 00:09:07.206 --rc geninfo_all_blocks=1 00:09:07.206 --rc geninfo_unexecuted_blocks=1 00:09:07.206 00:09:07.206 ' 00:09:07.206 20:36:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:07.206 20:36:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1589545 00:09:07.206 20:36:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.206 20:36:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1589545 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1589545 ']' 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.206 20:36:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:07.467 [2024-10-08 20:36:36.069790] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:07.467 [2024-10-08 20:36:36.069977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589545 ] 00:09:07.467 [2024-10-08 20:36:36.209689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.728 [2024-10-08 20:36:36.428417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.298 20:36:36 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.298 20:36:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:08.298 20:36:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:08.298 20:36:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:08.298 20:36:36 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.298 20:36:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:08.298 { 00:09:08.298 "filename": "/tmp/spdk_mem_dump.txt" 00:09:08.298 } 00:09:08.298 20:36:36 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.298 20:36:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:08.298 DPDK memory size 860.000000 MiB in 1 heap(s) 00:09:08.298 1 heaps totaling size 860.000000 MiB 00:09:08.298 size: 860.000000 MiB heap id: 0 00:09:08.298 end heaps---------- 00:09:08.298 9 mempools totaling size 642.649841 MiB 00:09:08.298 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:08.298 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:08.298 size: 92.545471 MiB name: bdev_io_1589545 00:09:08.298 size: 51.011292 MiB name: evtpool_1589545 00:09:08.298 size: 50.003479 MiB name: msgpool_1589545 00:09:08.298 size: 36.509338 MiB name: fsdev_io_1589545 00:09:08.298 size: 21.763794 MiB name: PDU_Pool 00:09:08.298 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:08.298 size: 0.026123 MiB name: Session_Pool 00:09:08.298 end mempools------- 00:09:08.298 6 memzones totaling size 4.142822 MiB 00:09:08.298 size: 1.000366 MiB name: RG_ring_0_1589545 00:09:08.298 size: 1.000366 MiB name: RG_ring_1_1589545 00:09:08.298 size: 1.000366 MiB name: RG_ring_4_1589545 00:09:08.298 size: 1.000366 MiB name: RG_ring_5_1589545 00:09:08.298 size: 0.125366 MiB name: RG_ring_2_1589545 00:09:08.298 size: 0.015991 MiB name: RG_ring_3_1589545 00:09:08.298 end memzones------- 00:09:08.298 20:36:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:08.557 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:09:08.557 list of free elements. 
size: 13.984680 MiB 00:09:08.557 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:08.557 element at address: 0x200000800000 with size: 1.996948 MiB 00:09:08.557 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:09:08.557 element at address: 0x20001be00000 with size: 0.999878 MiB 00:09:08.557 element at address: 0x200034a00000 with size: 0.994446 MiB 00:09:08.557 element at address: 0x200009600000 with size: 0.959839 MiB 00:09:08.557 element at address: 0x200015e00000 with size: 0.954285 MiB 00:09:08.557 element at address: 0x20001c000000 with size: 0.936584 MiB 00:09:08.557 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:08.557 element at address: 0x20001d800000 with size: 0.582886 MiB 00:09:08.557 element at address: 0x200003e00000 with size: 0.495422 MiB 00:09:08.557 element at address: 0x20000d800000 with size: 0.490723 MiB 00:09:08.557 element at address: 0x20001c200000 with size: 0.485657 MiB 00:09:08.557 element at address: 0x200007000000 with size: 0.481934 MiB 00:09:08.557 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:09:08.557 element at address: 0x200003a00000 with size: 0.355042 MiB 00:09:08.557 list of standard malloc elements. size: 199.218628 MiB 00:09:08.557 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:09:08.557 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:09:08.557 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:09:08.557 element at address: 0x20001befff80 with size: 1.000122 MiB 00:09:08.557 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:09:08.557 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:08.557 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:09:08.557 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:08.557 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:09:08.557 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003aff940 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003eff000 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20000707b600 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:09:08.558 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:09:08.558 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:09:08.558 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20001d895380 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20001d895440 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:09:08.558 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:09:08.558 list of memzone associated elements. size: 646.796692 MiB 00:09:08.558 element at address: 0x20001d895500 with size: 211.416748 MiB 00:09:08.558 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:08.558 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:09:08.558 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:08.558 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:09:08.558 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1589545_0 00:09:08.558 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:08.558 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1589545_0 00:09:08.558 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:08.558 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1589545_0 00:09:08.558 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:09:08.558 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1589545_0 00:09:08.558 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:09:08.558 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:08.558 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:09:08.558 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:08.558 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:08.558 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1589545 00:09:08.558 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:08.558 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1589545 00:09:08.558 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:08.558 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1589545 00:09:08.558 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:09:08.558 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:08.558 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:09:08.558 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:08.558 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:09:08.558 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:08.558 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:09:08.558 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:08.558 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:08.558 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1589545 00:09:08.558 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:08.558 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1589545 00:09:08.558 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:09:08.558 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1589545 00:09:08.558 element at address: 0x200034afe940 with size: 1.000488 MiB 00:09:08.558 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1589545 00:09:08.558 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:09:08.558 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1589545 00:09:08.558 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:09:08.558 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1589545 00:09:08.558 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:09:08.558 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:08.558 element at address: 0x20000707b780 with size: 0.500488 MiB 00:09:08.558 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:08.558 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:09:08.558 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:08.558 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:09:08.558 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1589545 00:09:08.558 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:09:08.558 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:08.558 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:09:08.558 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:08.558 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:09:08.558 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1589545 00:09:08.558 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:09:08.558 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:08.558 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:08.558 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1589545 00:09:08.558 element at address: 0x200003affa00 with size: 0.000305 MiB 00:09:08.558 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1589545 00:09:08.558 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:09:08.558 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1589545 00:09:08.558 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:09:08.558 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:08.558 20:36:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:08.558 20:36:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1589545 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1589545 ']' 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1589545 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589545 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589545' 
00:09:08.558 killing process with pid 1589545 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1589545 00:09:08.558 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1589545 00:09:09.495 00:09:09.495 real 0m2.122s 00:09:09.495 user 0m2.275s 00:09:09.495 sys 0m0.806s 00:09:09.495 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.495 20:36:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:09.495 ************************************ 00:09:09.495 END TEST dpdk_mem_utility 00:09:09.495 ************************************ 00:09:09.495 20:36:37 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:09.495 20:36:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.495 20:36:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.495 20:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:09.495 ************************************ 00:09:09.495 START TEST event 00:09:09.495 ************************************ 00:09:09.495 20:36:37 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:09.495 * Looking for test storage... 00:09:09.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.496 20:36:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.496 20:36:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.496 20:36:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.496 20:36:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.496 20:36:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.496 20:36:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.496 20:36:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.496 20:36:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.496 20:36:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.496 20:36:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.496 20:36:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.496 20:36:38 event -- scripts/common.sh@344 -- # case "$op" in 00:09:09.496 20:36:38 event -- scripts/common.sh@345 -- # : 1 00:09:09.496 20:36:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.496 20:36:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.496 20:36:38 event -- scripts/common.sh@365 -- # decimal 1 00:09:09.496 20:36:38 event -- scripts/common.sh@353 -- # local d=1 00:09:09.496 20:36:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.496 20:36:38 event -- scripts/common.sh@355 -- # echo 1 00:09:09.496 20:36:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.496 20:36:38 event -- scripts/common.sh@366 -- # decimal 2 00:09:09.496 20:36:38 event -- scripts/common.sh@353 -- # local d=2 00:09:09.496 20:36:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.496 20:36:38 event -- scripts/common.sh@355 -- # echo 2 00:09:09.496 20:36:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.496 20:36:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.496 20:36:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.496 20:36:38 event -- scripts/common.sh@368 -- # return 0 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.496 --rc genhtml_branch_coverage=1 00:09:09.496 --rc genhtml_function_coverage=1 00:09:09.496 --rc genhtml_legend=1 00:09:09.496 --rc geninfo_all_blocks=1 00:09:09.496 --rc geninfo_unexecuted_blocks=1 00:09:09.496 00:09:09.496 ' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.496 --rc genhtml_branch_coverage=1 00:09:09.496 --rc genhtml_function_coverage=1 00:09:09.496 --rc genhtml_legend=1 00:09:09.496 --rc geninfo_all_blocks=1 00:09:09.496 --rc geninfo_unexecuted_blocks=1 00:09:09.496 00:09:09.496 ' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.496 --rc genhtml_branch_coverage=1 00:09:09.496 --rc genhtml_function_coverage=1 00:09:09.496 --rc genhtml_legend=1 00:09:09.496 --rc geninfo_all_blocks=1 00:09:09.496 --rc geninfo_unexecuted_blocks=1 00:09:09.496 00:09:09.496 ' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.496 --rc genhtml_branch_coverage=1 00:09:09.496 --rc genhtml_function_coverage=1 00:09:09.496 --rc genhtml_legend=1 00:09:09.496 --rc geninfo_all_blocks=1 00:09:09.496 --rc geninfo_unexecuted_blocks=1 00:09:09.496 00:09:09.496 ' 00:09:09.496 20:36:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:09.496 20:36:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:09.496 20:36:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:09.496 20:36:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.496 20:36:38 event -- common/autotest_common.sh@10 -- # set +x 00:09:09.496 ************************************ 00:09:09.496 START TEST event_perf 00:09:09.496 ************************************ 00:09:09.496 20:36:38 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:09:09.496 Running I/O for 1 seconds...[2024-10-08 20:36:38.183464] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:09.496 [2024-10-08 20:36:38.183541] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589871 ] 00:09:09.758 [2024-10-08 20:36:38.295910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.758 [2024-10-08 20:36:38.518044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.758 [2024-10-08 20:36:38.518141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.758 [2024-10-08 20:36:38.518234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.758 [2024-10-08 20:36:38.518242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.134 Running I/O for 1 seconds... 00:09:11.134 lcore 0: 210529 00:09:11.134 lcore 1: 210529 00:09:11.134 lcore 2: 210530 00:09:11.134 lcore 3: 210529 00:09:11.134 done. 00:09:11.134 00:09:11.134 real 0m1.560s 00:09:11.134 user 0m4.396s 00:09:11.134 sys 0m0.155s 00:09:11.134 20:36:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.134 20:36:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:11.134 ************************************ 00:09:11.134 END TEST event_perf 00:09:11.134 ************************************ 00:09:11.134 20:36:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:11.134 20:36:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:11.134 20:36:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.134 20:36:39 event -- common/autotest_common.sh@10 -- # set +x 00:09:11.134 ************************************ 00:09:11.134 START TEST event_reactor 00:09:11.134 ************************************ 00:09:11.134 20:36:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:11.134 [2024-10-08 20:36:39.798163] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:11.134 [2024-10-08 20:36:39.798231] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590036 ] 00:09:11.394 [2024-10-08 20:36:39.902739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.394 [2024-10-08 20:36:40.090881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.772 test_start 00:09:12.772 oneshot 00:09:12.772 tick 100 00:09:12.773 tick 100 00:09:12.773 tick 250 00:09:12.773 tick 100 00:09:12.773 tick 100 00:09:12.773 tick 100 00:09:12.773 tick 250 00:09:12.773 tick 500 00:09:12.773 tick 100 00:09:12.773 tick 100 00:09:12.773 tick 250 00:09:12.773 tick 100 00:09:12.773 tick 100 00:09:12.773 test_end 00:09:12.773 00:09:12.773 real 0m1.522s 00:09:12.773 user 0m1.402s 00:09:12.773 sys 0m0.110s 00:09:12.773 20:36:41 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.773 20:36:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:12.773 ************************************ 00:09:12.773 END TEST event_reactor 00:09:12.773 ************************************ 00:09:12.773 20:36:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:12.773 20:36:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:12.773 20:36:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.773 20:36:41 event -- common/autotest_common.sh@10 -- # set +x 00:09:12.773 ************************************ 00:09:12.773 START TEST event_reactor_perf 00:09:12.773 ************************************ 00:09:12.773 20:36:41 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:12.773 [2024-10-08 20:36:41.401742] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:12.773 [2024-10-08 20:36:41.401885] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590192 ] 00:09:13.033 [2024-10-08 20:36:41.539185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.033 [2024-10-08 20:36:41.739465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.415 test_start 00:09:14.415 test_end 00:09:14.415 Performance: 166902 events per second 00:09:14.415 00:09:14.415 real 0m1.558s 00:09:14.415 user 0m1.400s 00:09:14.415 sys 0m0.145s 00:09:14.415 20:36:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.415 20:36:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:14.415 ************************************ 00:09:14.415 END TEST event_reactor_perf 00:09:14.415 ************************************ 00:09:14.415 20:36:42 event -- event/event.sh@49 -- # uname -s 00:09:14.415 20:36:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:14.415 20:36:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:14.415 20:36:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.415 20:36:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.415 20:36:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.415 ************************************ 00:09:14.415 START TEST event_scheduler 00:09:14.415 ************************************ 00:09:14.415 20:36:43 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:14.415 * Looking for test storage... 
00:09:14.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:14.415 20:36:43 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:14.415 20:36:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:09:14.415 20:36:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:14.676 20:36:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.676 20:36:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.677 20:36:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:14.677 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1590507 00:09:14.677 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:14.677 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.677 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1590507 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1590507 ']' 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.677 20:36:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:14.677 [2024-10-08 20:36:43.350515] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:14.677 [2024-10-08 20:36:43.350633] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590507 ] 00:09:14.965 [2024-10-08 20:36:43.471183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.965 [2024-10-08 20:36:43.693616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.965 [2024-10-08 20:36:43.693711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.965 [2024-10-08 20:36:43.693781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.965 [2024-10-08 20:36:43.697670] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:15.225 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 [2024-10-08 20:36:43.886788] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:15.225 [2024-10-08 20:36:43.886822] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:15.225 [2024-10-08 20:36:43.886843] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:15.225 [2024-10-08 20:36:43.886856] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:15.225 [2024-10-08 20:36:43.886868] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 20:36:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 20:36:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 [2024-10-08 20:36:44.064097] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:15.486 20:36:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:15.486 20:36:44 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.486 20:36:44 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 ************************************ 00:09:15.486 START TEST scheduler_create_thread 00:09:15.486 ************************************ 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 2 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 3 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 4 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 5 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 6 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 7 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 8 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.486 9 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:15.486 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.487 10 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.487 20:36:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.401 20:36:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.401 20:36:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:17.401 20:36:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:17.401 20:36:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.401 20:36:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 20:36:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.996 00:09:17.996 real 0m2.620s 00:09:17.996 user 0m0.013s 00:09:17.996 sys 0m0.006s 00:09:17.996 20:36:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.996 20:36:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 ************************************ 00:09:17.996 END TEST scheduler_create_thread 00:09:17.996 ************************************ 00:09:17.996 20:36:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:17.996 20:36:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1590507 00:09:17.996 20:36:46 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1590507 ']' 00:09:17.996 20:36:46 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1590507 00:09:17.996 20:36:46 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:17.996 20:36:46 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.996 20:36:46 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1590507 00:09:18.256 20:36:46 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:18.256 20:36:46 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:18.256 20:36:46 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1590507' 00:09:18.256 killing process with pid 1590507 00:09:18.256 20:36:46 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1590507 00:09:18.256 20:36:46 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1590507 00:09:18.516 [2024-10-08 20:36:47.189382] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:09:19.085 00:09:19.085 real 0m4.583s 00:09:19.085 user 0m7.036s 00:09:19.085 sys 0m0.565s 00:09:19.085 20:36:47 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.085 20:36:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:19.085 ************************************ 00:09:19.085 END TEST event_scheduler 00:09:19.085 ************************************ 00:09:19.085 20:36:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:19.085 20:36:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:19.085 20:36:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.085 20:36:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.085 20:36:47 event -- common/autotest_common.sh@10 -- # set +x 00:09:19.085 ************************************ 00:09:19.085 START TEST app_repeat 00:09:19.085 ************************************ 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1591083 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1591083' 00:09:19.085 Process app_repeat pid: 1591083 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:19.085 spdk_app_start Round 0 00:09:19.085 20:36:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1591083 /var/tmp/spdk-nbd.sock 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1591083 ']' 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:19.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.085 20:36:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:19.085 [2024-10-08 20:36:47.684101] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:09:19.085 [2024-10-08 20:36:47.684170] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591083 ] 00:09:19.085 [2024-10-08 20:36:47.785609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.346 [2024-10-08 20:36:48.007554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.346 [2024-10-08 20:36:48.007568] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.285 20:36:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.285 20:36:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:20.285 20:36:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.227 Malloc0 00:09:21.227 20:36:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.487 Malloc1 00:09:21.487 20:36:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:21.487 20:36:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.060 /dev/nbd0 00:09:22.060 20:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.060 20:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.060 1+0 records in 00:09:22.060 1+0 records out 00:09:22.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194858 s, 21.0 MB/s 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:22.060 20:36:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:22.060 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.060 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.060 20:36:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:22.321 /dev/nbd1 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.321 1+0 records in 00:09:22.321 1+0 records out 00:09:22.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389714 s, 10.5 MB/s 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:22.321 20:36:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.321 
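Both NBD devices used above are exported over the dedicated /var/tmp/spdk-nbd.sock RPC server: a malloc bdev is created, mapped to a kernel /dev/nbdX node with nbd_start_disk, and the harness then polls /proc/partitions (up to 20 attempts) before touching the device. Distilled into a sketch, with the sizes and names taken from the trace and the polling loop simplified:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # 64 MiB malloc bdev, 4096-byte blocks -> Malloc0
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as a kernel block device
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # wait for the kernel to register it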
20:36:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.321 20:36:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.891 20:36:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:22.891 { 00:09:22.891 "nbd_device": "/dev/nbd0", 00:09:22.891 "bdev_name": "Malloc0" 00:09:22.891 }, 00:09:22.891 { 00:09:22.891 "nbd_device": "/dev/nbd1", 00:09:22.891 "bdev_name": "Malloc1" 00:09:22.891 } 00:09:22.891 ]' 00:09:22.891 20:36:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:22.891 { 00:09:22.891 "nbd_device": "/dev/nbd0", 00:09:22.892 "bdev_name": "Malloc0" 00:09:22.892 }, 00:09:22.892 { 00:09:22.892 "nbd_device": "/dev/nbd1", 00:09:22.892 "bdev_name": "Malloc1" 00:09:22.892 } 00:09:22.892 ]' 00:09:22.892 20:36:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.154 20:36:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.154 /dev/nbd1' 00:09:23.154 20:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.154 /dev/nbd1' 00:09:23.154 20:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.154 20:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.155 256+0 records in 00:09:23.155 256+0 records out 00:09:23.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830796 s, 126 MB/s 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.155 256+0 records in 00:09:23.155 256+0 records out 00:09:23.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0481839 s, 21.8 MB/s 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.155 256+0 records in 00:09:23.155 256+0 records out 00:09:23.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313248 s, 33.5 MB/s 00:09:23.155 20:36:51 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.155 20:36:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.727 20:36:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.369 20:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:24.629 20:36:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:24.629 20:36:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:25.199 20:36:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:25.769 [2024-10-08 20:36:54.289252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.769 [2024-10-08 20:36:54.505951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.769 [2024-10-08 20:36:54.505954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.030 [2024-10-08 20:36:54.607730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:26.030 [2024-10-08 20:36:54.607874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:28.573 20:36:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:28.573 20:36:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:28.573 spdk_app_start Round 1 00:09:28.573 20:36:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1591083 /var/tmp/spdk-nbd.sock 00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1591083 ']' 00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
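Each app_repeat round traced above then runs the same write-and-verify cycle against the two NBD devices before tearing the iteration down. Stripped of the harness plumbing, the cycle from the previous round is roughly as follows (paths copied from the trace, loop structure simplified):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write the pattern to each exported bdev
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                                # read back and compare byte-for-byte
  done
  rm "$tmp"
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM                # ends this iteration; app_repeat starts the next round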
00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.573 20:36:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:28.573 20:36:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.573 20:36:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:28.573 20:36:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:29.144 Malloc0 00:09:29.144 20:36:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.088 Malloc1 00:09:30.088 20:36:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.088 20:36:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:30.348 /dev/nbd0 00:09:30.609 20:36:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.609 20:36:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:30.609 1+0 records in 00:09:30.609 1+0 records out 00:09:30.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269767 s, 15.2 MB/s 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:30.609 20:36:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:30.609 20:36:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.609 20:36:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.609 20:36:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:31.179 /dev/nbd1 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.179 1+0 records in 00:09:31.179 1+0 records out 00:09:31.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426739 s, 9.6 MB/s 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:31.179 20:36:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.179 20:36:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:31.750 { 00:09:31.750 "nbd_device": "/dev/nbd0", 00:09:31.750 "bdev_name": "Malloc0" 00:09:31.750 }, 00:09:31.750 { 00:09:31.750 "nbd_device": "/dev/nbd1", 00:09:31.750 "bdev_name": "Malloc1" 00:09:31.750 } 00:09:31.750 ]' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.750 { 00:09:31.750 "nbd_device": "/dev/nbd0", 00:09:31.750 "bdev_name": "Malloc0" 00:09:31.750 }, 00:09:31.750 { 00:09:31.750 "nbd_device": "/dev/nbd1", 00:09:31.750 "bdev_name": "Malloc1" 00:09:31.750 } 00:09:31.750 ]' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:31.750 /dev/nbd1' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:31.750 /dev/nbd1' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:31.750 256+0 records in 00:09:31.750 256+0 records out 00:09:31.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00783587 s, 134 MB/s 00:09:31.750 20:37:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.751 20:37:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.751 256+0 records in 00:09:31.751 256+0 records out 00:09:31.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0450533 s, 23.3 MB/s 00:09:31.751 20:37:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.751 20:37:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.011 256+0 records in 00:09:32.011 256+0 records out 00:09:32.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435892 s, 24.1 MB/s 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.011 20:37:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.581 20:37:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.151 20:37:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.152 20:37:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.411 20:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.411 20:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.411 20:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.671 20:37:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.671 20:37:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:33.929 20:37:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:34.500 [2024-10-08 20:37:02.982318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.500 [2024-10-08 20:37:03.202629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.500 [2024-10-08 20:37:03.202645] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.761 [2024-10-08 20:37:03.309578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:34.761 [2024-10-08 20:37:03.309725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.303 20:37:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:37.303 20:37:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:37.303 spdk_app_start Round 2 00:09:37.303 20:37:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1591083 /var/tmp/spdk-nbd.sock 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1591083 ']' 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.303 20:37:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:37.303 20:37:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.873 Malloc0 00:09:37.873 20:37:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.443 Malloc1 00:09:38.443 20:37:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.443 20:37:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:39.014 /dev/nbd0 00:09:39.014 20:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:39.014 20:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:39.014 1+0 records in 00:09:39.014 1+0 records out 00:09:39.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231471 s, 17.7 MB/s 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:39.014 20:37:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:39.014 20:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.014 20:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.014 20:37:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:39.584 /dev/nbd1 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:39.584 1+0 records in 00:09:39.584 1+0 records out 00:09:39.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298134 s, 13.7 MB/s 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:39.584 20:37:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.584 20:37:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.152 20:37:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:40.152 { 00:09:40.152 "nbd_device": "/dev/nbd0", 00:09:40.152 "bdev_name": "Malloc0" 00:09:40.152 }, 00:09:40.152 { 00:09:40.152 "nbd_device": "/dev/nbd1", 00:09:40.152 "bdev_name": "Malloc1" 00:09:40.152 } 00:09:40.152 ]' 00:09:40.152 20:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.152 { 00:09:40.152 "nbd_device": "/dev/nbd0", 00:09:40.152 "bdev_name": "Malloc0" 00:09:40.152 }, 00:09:40.152 { 00:09:40.152 "nbd_device": "/dev/nbd1", 00:09:40.152 "bdev_name": "Malloc1" 00:09:40.152 } 00:09:40.152 ]' 00:09:40.152 20:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:40.412 /dev/nbd1' 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:40.412 /dev/nbd1' 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:40.412 256+0 records in 00:09:40.412 256+0 records out 00:09:40.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00867215 s, 121 MB/s 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.412 20:37:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:40.412 256+0 records in 00:09:40.412 256+0 records out 00:09:40.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0485914 s, 21.6 MB/s 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:40.412 256+0 records in 00:09:40.412 256+0 records out 00:09:40.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278363 s, 37.7 MB/s 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:40.412 20:37:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.413 20:37:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.983 20:37:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.553 20:37:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.505 20:37:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.505 20:37:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.505 20:37:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.506 20:37:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.506 20:37:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.506 20:37:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.506 20:37:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.764 20:37:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:43.334 [2024-10-08 20:37:11.824530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.334 [2024-10-08 20:37:12.046110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.334 [2024-10-08 20:37:12.046115] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.594 [2024-10-08 20:37:12.153433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:43.594 [2024-10-08 20:37:12.153552] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:46.168 20:37:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1591083 /var/tmp/spdk-nbd.sock 00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1591083 ']' 00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.168 20:37:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:46.427 20:37:15 event.app_repeat -- event/event.sh@39 -- # killprocess 1591083 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1591083 ']' 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1591083 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1591083 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1591083' 00:09:46.427 killing process with pid 1591083 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1591083 00:09:46.427 20:37:15 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1591083 00:09:46.995 spdk_app_start is called in Round 0. 00:09:46.995 Shutdown signal received, stop current app iteration 00:09:46.995 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:09:46.995 spdk_app_start is called in Round 1. 00:09:46.995 Shutdown signal received, stop current app iteration 00:09:46.995 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:09:46.995 spdk_app_start is called in Round 2. 00:09:46.995 Shutdown signal received, stop current app iteration 00:09:46.995 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:09:46.995 spdk_app_start is called in Round 3. 
00:09:46.995 Shutdown signal received, stop current app iteration 00:09:46.995 20:37:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:46.995 20:37:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:46.995 00:09:46.995 real 0m27.835s 00:09:46.995 user 1m3.190s 00:09:46.995 sys 0m6.105s 00:09:46.995 20:37:15 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.995 20:37:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.995 ************************************ 00:09:46.995 END TEST app_repeat 00:09:46.995 ************************************ 00:09:46.995 20:37:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:46.995 20:37:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:46.995 20:37:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.995 20:37:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.995 20:37:15 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.995 ************************************ 00:09:46.995 START TEST cpu_locks 00:09:46.995 ************************************ 00:09:46.995 20:37:15 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:46.995 * Looking for test storage... 00:09:46.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:46.995 20:37:15 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.995 20:37:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.995 20:37:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.256 20:37:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.256 20:37:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:47.256 20:37:15 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.256 20:37:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.257 --rc genhtml_branch_coverage=1 00:09:47.257 --rc genhtml_function_coverage=1 00:09:47.257 --rc genhtml_legend=1 00:09:47.257 --rc geninfo_all_blocks=1 00:09:47.257 --rc geninfo_unexecuted_blocks=1 00:09:47.257 00:09:47.257 ' 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.257 --rc genhtml_branch_coverage=1 00:09:47.257 --rc genhtml_function_coverage=1 00:09:47.257 --rc genhtml_legend=1 00:09:47.257 --rc geninfo_all_blocks=1 00:09:47.257 --rc geninfo_unexecuted_blocks=1 00:09:47.257 00:09:47.257 ' 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.257 --rc genhtml_branch_coverage=1 00:09:47.257 --rc genhtml_function_coverage=1 00:09:47.257 --rc genhtml_legend=1 00:09:47.257 --rc geninfo_all_blocks=1 00:09:47.257 --rc geninfo_unexecuted_blocks=1 00:09:47.257 00:09:47.257 ' 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.257 --rc genhtml_branch_coverage=1 00:09:47.257 --rc genhtml_function_coverage=1 00:09:47.257 --rc genhtml_legend=1 00:09:47.257 --rc geninfo_all_blocks=1 00:09:47.257 --rc geninfo_unexecuted_blocks=1 00:09:47.257 00:09:47.257 ' 00:09:47.257 20:37:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:47.257 20:37:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:47.257 20:37:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:47.257 20:37:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.257 20:37:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.257 ************************************ 
00:09:47.257 START TEST default_locks 00:09:47.257 ************************************ 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1594502 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1594502 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1594502 ']' 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.257 20:37:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.257 [2024-10-08 20:37:15.980133] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:47.257 [2024-10-08 20:37:15.980221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594502 ] 00:09:47.517 [2024-10-08 20:37:16.118295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.777 [2024-10-08 20:37:16.345190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.719 20:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.719 20:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:48.719 20:37:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1594502 00:09:48.719 20:37:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1594502 00:09:48.719 20:37:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.697 lslocks: write error 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1594502 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1594502 ']' 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1594502 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594502 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1594502' 00:09:49.697 killing process with pid 1594502 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1594502 00:09:49.697 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1594502 00:09:50.265 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1594502 00:09:50.265 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:50.265 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1594502 00:09:50.265 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1594502 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1594502 ']' 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1594502) - No such process 00:09:50.266 ERROR: process (pid: 1594502) is no longer running 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:50.266 00:09:50.266 real 0m3.010s 00:09:50.266 user 0m3.429s 00:09:50.266 sys 0m1.114s 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.266 20:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.266 ************************************ 00:09:50.266 END TEST default_locks 00:09:50.266 ************************************ 00:09:50.266 20:37:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:50.266 20:37:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.266 20:37:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.266 20:37:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.266 ************************************ 00:09:50.266 START TEST default_locks_via_rpc 00:09:50.266 ************************************ 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1594930 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1594930 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1594930 ']' 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
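The locks_exist check used by default_locks above (lslocks -p 1594502 piped into grep -q spdk_cpu_lock) is the test's core assertion: it only passes if the spdk_tgt started with -m 0x1 really holds its per-core lock. A stand-alone version of that check (check_cpu_lock and target_pid are illustrative names, not part of the harness) could look like:

  check_cpu_lock() {
      local pid=$1
      # lslocks -p restricts the listing to locks held by that pid;
      # SPDK's per-core lock files carry "spdk_cpu_lock" in their path
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  check_cpu_lock "$target_pid" && echo "core lock held" || echo "no core lock"

The stray "lslocks: write error" lines in the log are most likely lslocks running into a closed pipe once grep -q has matched and exited; on their own they do not indicate a failure.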
00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.266 20:37:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.525 [2024-10-08 20:37:19.058747] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:50.525 [2024-10-08 20:37:19.058845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594930 ] 00:09:50.525 [2024-10-08 20:37:19.128606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.525 [2024-10-08 20:37:19.243132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1594930 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1594930 00:09:51.096 20:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:51.667 20:37:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1594930 00:09:51.667 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1594930 ']' 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1594930 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594930 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.926 
20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594930' 00:09:51.926 killing process with pid 1594930 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1594930 00:09:51.926 20:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1594930 00:09:52.497 00:09:52.497 real 0m2.196s 00:09:52.497 user 0m2.066s 00:09:52.497 sys 0m0.968s 00:09:52.497 20:37:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.497 20:37:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.497 ************************************ 00:09:52.497 END TEST default_locks_via_rpc 00:09:52.497 ************************************ 00:09:52.497 20:37:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:52.497 20:37:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.497 20:37:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.497 20:37:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.497 ************************************ 00:09:52.497 START TEST non_locking_app_on_locked_coremask 00:09:52.497 ************************************ 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1595222 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1595222 /var/tmp/spdk.sock 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1595222 ']' 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.497 20:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.758 [2024-10-08 20:37:21.310582] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
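default_locks_via_rpc above is the runtime counterpart: with the target still running, rpc_cmd framework_disable_cpumask_locks releases the per-core lock (the no_locks step then finds no spdk_cpu_lock files) and framework_enable_cpumask_locks claims it again. Outside the harness the same two RPCs can be driven with SPDK's scripts/rpc.py client; a minimal sketch, assuming the stock rpc.py and the default /var/tmp/spdk.sock socket:

  ./build/bin/spdk_tgt -m 0x1 &                       # starts up holding the core 0 lock

  ./scripts/rpc.py framework_disable_cpumask_locks    # drop the lock at runtime
  ./scripts/rpc.py framework_enable_cpumask_locks     # take it back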
00:09:52.758 [2024-10-08 20:37:21.310714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595222 ] 00:09:52.758 [2024-10-08 20:37:21.417753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.019 [2024-10-08 20:37:21.644342] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1595364 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1595364 /var/tmp/spdk2.sock 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1595364 ']' 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:54.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.402 20:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:54.402 [2024-10-08 20:37:22.834074] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:54.402 [2024-10-08 20:37:22.834172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595364 ] 00:09:54.402 [2024-10-08 20:37:23.009359] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
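The pair of launches above is the whole point of non_locking_app_on_locked_coremask: pid 1595222 holds the core 0 lock, yet pid 1595364 comes up on the very same -m 0x1 mask because it is started with --disable-cpumask-locks (hence its "CPU core locks deactivated" notice) and with its own RPC socket, so the two targets also do not collide on /var/tmp/spdk.sock. Reduced to the two command lines, with the sockets used in the log:

  ./build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without locking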
00:09:54.402 [2024-10-08 20:37:23.009443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.973 [2024-10-08 20:37:23.467970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.914 20:37:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.914 20:37:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:55.914 20:37:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1595222 00:09:55.914 20:37:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1595222 00:09:55.914 20:37:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.295 lslocks: write error 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1595222 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1595222 ']' 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1595222 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595222 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595222' 00:09:57.295 killing process with pid 1595222 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1595222 00:09:57.295 20:37:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1595222 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1595364 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1595364 ']' 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1595364 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595364 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595364' 00:09:58.673 
killing process with pid 1595364 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1595364 00:09:58.673 20:37:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1595364 00:09:59.612 00:09:59.612 real 0m6.756s 00:09:59.612 user 0m7.265s 00:09:59.612 sys 0m2.024s 00:09:59.612 20:37:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.612 20:37:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.612 ************************************ 00:09:59.612 END TEST non_locking_app_on_locked_coremask 00:09:59.612 ************************************ 00:09:59.612 20:37:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:59.612 20:37:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.612 20:37:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.612 20:37:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.612 ************************************ 00:09:59.613 START TEST locking_app_on_unlocked_coremask 00:09:59.613 ************************************ 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1596056 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1596056 /var/tmp/spdk.sock 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1596056 ']' 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.613 20:37:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.613 [2024-10-08 20:37:28.180111] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:09:59.613 [2024-10-08 20:37:28.180216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596056 ] 00:09:59.613 [2024-10-08 20:37:28.320510] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:59.613 [2024-10-08 20:37:28.320598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.873 [2024-10-08 20:37:28.538852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1596167 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1596167 /var/tmp/spdk2.sock 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1596167 ']' 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:00.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.443 20:37:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.443 [2024-10-08 20:37:29.078048] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
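locking_app_on_unlocked_coremask reverses that arrangement: here the first target (pid 1596056) is the one started with --disable-cpumask-locks, so core 0 is still unclaimed when the second target (pid 1596167, default locking, on /var/tmp/spdk2.sock) starts and takes the lock; the locks_exist 1596167 check that follows is therefore pointed at the second pid. A rough equivalent, with second_pid as an illustrative variable:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &       # does not claim core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # this one owns the lock
  second_pid=$!
  lslocks -p "$second_pid" | grep spdk_cpu_lock               # the lock shows up under the second pid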
00:10:00.443 [2024-10-08 20:37:29.078150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596167 ] 00:10:00.703 [2024-10-08 20:37:29.255595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.963 [2024-10-08 20:37:29.706190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.904 20:37:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.904 20:37:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:01.904 20:37:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1596167 00:10:01.904 20:37:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1596167 00:10:01.904 20:37:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:03.284 lslocks: write error 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1596056 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1596056 ']' 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1596056 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596056 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596056' 00:10:03.284 killing process with pid 1596056 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1596056 00:10:03.284 20:37:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1596056 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1596167 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1596167 ']' 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1596167 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596167 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.663 20:37:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596167' 00:10:04.663 killing process with pid 1596167 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1596167 00:10:04.663 20:37:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1596167 00:10:05.606 00:10:05.606 real 0m5.968s 00:10:05.606 user 0m6.499s 00:10:05.606 sys 0m2.071s 00:10:05.607 20:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.607 20:37:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.607 ************************************ 00:10:05.607 END TEST locking_app_on_unlocked_coremask 00:10:05.607 ************************************ 00:10:05.607 20:37:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:05.607 20:37:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.607 20:37:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.607 20:37:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.607 ************************************ 00:10:05.607 START TEST locking_app_on_locked_coremask 00:10:05.607 ************************************ 00:10:05.607 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:05.607 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1596755 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1596755 /var/tmp/spdk.sock 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1596755 ']' 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.608 20:37:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.608 [2024-10-08 20:37:34.138836] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:10:05.608 [2024-10-08 20:37:34.138933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596755 ] 00:10:05.608 [2024-10-08 20:37:34.241144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.871 [2024-10-08 20:37:34.473204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1596895 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1596895 /var/tmp/spdk2.sock 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1596895 /var/tmp/spdk2.sock 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1596895 /var/tmp/spdk2.sock 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1596895 ']' 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.809 20:37:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.069 [2024-10-08 20:37:35.616801] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:10:07.069 [2024-10-08 20:37:35.616914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596895 ] 00:10:07.069 [2024-10-08 20:37:35.797419] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1596755 has claimed it. 00:10:07.069 [2024-10-08 20:37:35.797547] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:08.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1596895) - No such process 00:10:08.005 ERROR: process (pid: 1596895) is no longer running 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1596755 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1596755 00:10:08.005 20:37:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:08.572 lslocks: write error 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1596755 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1596755 ']' 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1596755 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596755 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596755' 00:10:08.572 killing process with pid 1596755 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1596755 00:10:08.572 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1596755 00:10:09.510 00:10:09.510 real 0m3.861s 00:10:09.510 user 0m4.804s 00:10:09.510 sys 0m1.062s 00:10:09.510 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:10:09.510 20:37:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 ************************************ 00:10:09.510 END TEST locking_app_on_locked_coremask 00:10:09.510 ************************************ 00:10:09.510 20:37:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:09.510 20:37:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:09.510 20:37:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.510 20:37:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 ************************************ 00:10:09.510 START TEST locking_overlapped_coremask 00:10:09.510 ************************************ 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1597192 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1597192 /var/tmp/spdk.sock 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1597192 ']' 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.510 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 [2024-10-08 20:37:38.144266] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
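locking_app_on_locked_coremask, which finished just above, is the negative case: the second target kept the default locking, so claim_cpu_cores refused a second lock on core 0 ("probably process 1596755 has claimed it") and spdk_app_start exited. The harness wraps the corresponding waitforlisten in its NOT helper, which inverts the exit status so that an expected failure counts as a pass; that is why the "No such process" and "is no longer running" lines are harmless here. A simplified stand-alone version of that inversion (expect_failure is an illustrative name, not the real helper in autotest_common.sh):

  expect_failure() {
      if "$@"; then
          echo "command unexpectedly succeeded" >&2
          return 1
      fi
      return 0
  }

  expect_failure kill -0 1596895   # the pid is gone, so the probe fails and the step counts as a pass

The overlapped-coremask tests that follow reuse the same pattern.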
00:10:09.510 [2024-10-08 20:37:38.144443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597192 ] 00:10:09.510 [2024-10-08 20:37:38.264014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.772 [2024-10-08 20:37:38.504794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.772 [2024-10-08 20:37:38.504896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.772 [2024-10-08 20:37:38.504907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1597330 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1597330 /var/tmp/spdk2.sock 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1597330 /var/tmp/spdk2.sock 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1597330 /var/tmp/spdk2.sock 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1597330 ']' 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.343 20:37:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.343 [2024-10-08 20:37:39.010348] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
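locking_overlapped_coremask shifts the conflict from identical masks to overlapping ones: the first target runs with -m 0x7 (cores 0, 1 and 2, which is why three reactors started above) while the second is launched with -m 0x1c (cores 2, 3 and 4). Only core 2 is shared, and one shared core is enough for the second claim to be rejected. The overlap is plain bitwise arithmetic:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2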
00:10:10.343 [2024-10-08 20:37:39.010460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597330 ] 00:10:10.602 [2024-10-08 20:37:39.161789] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1597192 has claimed it. 00:10:10.602 [2024-10-08 20:37:39.161846] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:11.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1597330) - No such process 00:10:11.168 ERROR: process (pid: 1597330) is no longer running 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1597192 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1597192 ']' 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1597192 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597192 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597192' 00:10:11.168 killing process with pid 1597192 00:10:11.168 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1597192 00:10:11.169 20:37:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1597192 00:10:12.107 00:10:12.107 real 0m2.517s 00:10:12.107 user 0m6.331s 00:10:12.107 sys 0m0.789s 00:10:12.107 20:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.107 20:37:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.107 ************************************ 00:10:12.107 END TEST locking_overlapped_coremask 00:10:12.107 ************************************ 00:10:12.107 20:37:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:12.107 20:37:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:12.107 20:37:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.107 20:37:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.107 ************************************ 00:10:12.107 START TEST locking_overlapped_coremask_via_rpc 00:10:12.108 ************************************ 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1597570 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1597570 /var/tmp/spdk.sock 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1597570 ']' 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.108 20:37:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.108 [2024-10-08 20:37:40.649297] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:10:12.108 [2024-10-08 20:37:40.649397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597570 ] 00:10:12.108 [2024-10-08 20:37:40.718080] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
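The check_remaining_locks step in the previous test shows where these locks actually live: one file per claimed core under /var/tmp, named spdk_cpu_lock_000, spdk_cpu_lock_001 and so on, which is why a target running with -m 0x7 is expected to account for exactly spdk_cpu_lock_000 through spdk_cpu_lock_002. The same view is available directly; note that the mere presence of a file does not prove it is currently locked, lslocks shows the actual holders:

  ls -l /var/tmp/spdk_cpu_lock_* 2>/dev/null   # the per-core lock files under /var/tmp
  lslocks | grep spdk_cpu_lock                 # which pid currently holds each lock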
00:10:12.108 [2024-10-08 20:37:40.718133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.108 [2024-10-08 20:37:40.837552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.108 [2024-10-08 20:37:40.837608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.108 [2024-10-08 20:37:40.837611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1597630 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1597630 /var/tmp/spdk2.sock 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1597630 ']' 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:12.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.701 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.701 [2024-10-08 20:37:41.294997] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:10:12.701 [2024-10-08 20:37:41.295115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597630 ] 00:10:12.701 [2024-10-08 20:37:41.448215] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:12.701 [2024-10-08 20:37:41.448314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.268 [2024-10-08 20:37:41.847192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.268 [2024-10-08 20:37:41.847246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:13.268 [2024-10-08 20:37:41.850659] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.229 [2024-10-08 20:37:42.933769] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1597570 has claimed it. 
00:10:14.229 request: 00:10:14.229 { 00:10:14.229 "method": "framework_enable_cpumask_locks", 00:10:14.229 "req_id": 1 00:10:14.229 } 00:10:14.229 Got JSON-RPC error response 00:10:14.229 response: 00:10:14.229 { 00:10:14.229 "code": -32603, 00:10:14.229 "message": "Failed to claim CPU core: 2" 00:10:14.229 } 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1597570 /var/tmp/spdk.sock 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1597570 ']' 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.229 20:37:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1597630 /var/tmp/spdk2.sock 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1597630 ']' 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:14.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
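The failure above is the expected result of this test case: the first target (pid 1597570, core mask 0x7 = cores 0-2) enabled the CPU core locks over RPC and created /var/tmp/spdk_cpu_lock_000..002, so the second target (pid 1597630, core mask 0x1c = cores 2-4) cannot claim the shared core 2. A minimal out-of-harness sketch of the same sequence, using only the binaries, flags and RPC names seen in this run (paths relative to the spdk tree; sockets as above):

    # start a target on cores 0-2 with core locks disabled at startup
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # turn the locks on at runtime; this creates /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # start a second target that overlaps on core 2, also without startup locks
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # asking the second target to take its locks now fails with
    # "Failed to claim CPU core: 2", exactly as in the JSON-RPC response above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks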
00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.794 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:15.358 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:15.358 00:10:15.359 real 0m3.271s 00:10:15.359 user 0m2.064s 00:10:15.359 sys 0m0.296s 00:10:15.359 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.359 20:37:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.359 ************************************ 00:10:15.359 END TEST locking_overlapped_coremask_via_rpc 00:10:15.359 ************************************ 00:10:15.359 20:37:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:15.359 20:37:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1597570 ]] 00:10:15.359 20:37:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1597570 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1597570 ']' 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1597570 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597570 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597570' 00:10:15.359 killing process with pid 1597570 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1597570 00:10:15.359 20:37:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1597570 00:10:15.924 20:37:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1597630 ]] 00:10:15.924 20:37:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1597630 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1597630 ']' 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1597630 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597630 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597630' 00:10:15.924 killing process with pid 1597630 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1597630 00:10:15.924 20:37:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1597630 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1597570 ]] 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1597570 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1597570 ']' 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1597570 00:10:16.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1597570) - No such process 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1597570 is not found' 00:10:16.492 Process with pid 1597570 is not found 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1597630 ]] 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1597630 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1597630 ']' 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1597630 00:10:16.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1597630) - No such process 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1597630 is not found' 00:10:16.492 Process with pid 1597630 is not found 00:10:16.492 20:37:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:16.492 00:10:16.492 real 0m29.485s 00:10:16.492 user 0m49.459s 00:10:16.492 sys 0m9.562s 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.492 20:37:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 ************************************ 00:10:16.492 END TEST cpu_locks 00:10:16.492 ************************************ 00:10:16.492 00:10:16.492 real 1m7.127s 00:10:16.492 user 2m7.165s 00:10:16.492 sys 0m16.976s 00:10:16.492 20:37:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.492 20:37:45 event -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 ************************************ 00:10:16.492 END TEST event 00:10:16.492 ************************************ 00:10:16.492 20:37:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:16.492 20:37:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:16.492 20:37:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.492 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 ************************************ 00:10:16.492 START TEST thread 00:10:16.492 ************************************ 00:10:16.492 20:37:45 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:16.492 * Looking for test storage... 00:10:16.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:16.493 20:37:45 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:16.493 20:37:45 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:10:16.493 20:37:45 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:16.753 20:37:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.753 20:37:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.753 20:37:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.753 20:37:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.753 20:37:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.753 20:37:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.753 20:37:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.753 20:37:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.753 20:37:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.753 20:37:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.753 20:37:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.753 20:37:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:16.753 20:37:45 thread -- scripts/common.sh@345 -- # : 1 00:10:16.753 20:37:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.753 20:37:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.753 20:37:45 thread -- scripts/common.sh@365 -- # decimal 1 00:10:16.753 20:37:45 thread -- scripts/common.sh@353 -- # local d=1 00:10:16.753 20:37:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.753 20:37:45 thread -- scripts/common.sh@355 -- # echo 1 00:10:16.753 20:37:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.753 20:37:45 thread -- scripts/common.sh@366 -- # decimal 2 00:10:16.753 20:37:45 thread -- scripts/common.sh@353 -- # local d=2 00:10:16.753 20:37:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.753 20:37:45 thread -- scripts/common.sh@355 -- # echo 2 00:10:16.753 20:37:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.753 20:37:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.753 20:37:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.753 20:37:45 thread -- scripts/common.sh@368 -- # return 0 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:16.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.753 --rc genhtml_branch_coverage=1 00:10:16.753 --rc genhtml_function_coverage=1 00:10:16.753 --rc genhtml_legend=1 00:10:16.753 --rc geninfo_all_blocks=1 00:10:16.753 --rc geninfo_unexecuted_blocks=1 00:10:16.753 00:10:16.753 ' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:16.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.753 --rc genhtml_branch_coverage=1 00:10:16.753 --rc genhtml_function_coverage=1 00:10:16.753 --rc genhtml_legend=1 00:10:16.753 --rc geninfo_all_blocks=1 00:10:16.753 --rc geninfo_unexecuted_blocks=1 00:10:16.753 
00:10:16.753 ' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:16.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.753 --rc genhtml_branch_coverage=1 00:10:16.753 --rc genhtml_function_coverage=1 00:10:16.753 --rc genhtml_legend=1 00:10:16.753 --rc geninfo_all_blocks=1 00:10:16.753 --rc geninfo_unexecuted_blocks=1 00:10:16.753 00:10:16.753 ' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:16.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.753 --rc genhtml_branch_coverage=1 00:10:16.753 --rc genhtml_function_coverage=1 00:10:16.753 --rc genhtml_legend=1 00:10:16.753 --rc geninfo_all_blocks=1 00:10:16.753 --rc geninfo_unexecuted_blocks=1 00:10:16.753 00:10:16.753 ' 00:10:16.753 20:37:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.753 20:37:45 thread -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 ************************************ 00:10:16.753 START TEST thread_poller_perf 00:10:16.753 ************************************ 00:10:16.753 20:37:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:16.753 [2024-10-08 20:37:45.408454] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:10:16.753 [2024-10-08 20:37:45.408524] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598261 ] 00:10:16.753 [2024-10-08 20:37:45.508457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.013 [2024-10-08 20:37:45.730207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.013 Running 1000 pollers for 1 seconds with 1 microseconds period. 
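For readability of the two poller_perf invocations in this block, the flags map onto the banner each run prints; the mapping below is inferred from that banner, not from the tool's help text:

    # ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000 -> "Running 1000 pollers ..."
    #   -t 1    -> "... for 1 seconds ..."
    #   -l 1    -> "... with 1 microseconds period" (the second run below passes -l 0)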
00:10:18.390 [2024-10-08T18:37:47.153Z] ====================================== 00:10:18.390 [2024-10-08T18:37:47.153Z] busy:2730433932 (cyc) 00:10:18.390 [2024-10-08T18:37:47.153Z] total_run_count: 138000 00:10:18.390 [2024-10-08T18:37:47.153Z] tsc_hz: 2700000000 (cyc) 00:10:18.390 [2024-10-08T18:37:47.153Z] ====================================== 00:10:18.390 [2024-10-08T18:37:47.153Z] poller_cost: 19785 (cyc), 7327 (nsec) 00:10:18.390 00:10:18.390 real 0m1.562s 00:10:18.390 user 0m1.420s 00:10:18.390 sys 0m0.129s 00:10:18.390 20:37:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.390 20:37:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:18.390 ************************************ 00:10:18.390 END TEST thread_poller_perf 00:10:18.390 ************************************ 00:10:18.390 20:37:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:18.390 20:37:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:18.390 20:37:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.390 20:37:46 thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.390 ************************************ 00:10:18.390 START TEST thread_poller_perf 00:10:18.390 ************************************ 00:10:18.390 20:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:18.390 [2024-10-08 20:37:47.034497] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:10:18.390 [2024-10-08 20:37:47.034576] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598420 ] 00:10:18.390 [2024-10-08 20:37:47.106833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.649 [2024-10-08 20:37:47.263871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.649 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:20.027 [2024-10-08T18:37:48.790Z] ====================================== 00:10:20.027 [2024-10-08T18:37:48.790Z] busy:2705265927 (cyc) 00:10:20.027 [2024-10-08T18:37:48.790Z] total_run_count: 1811000 00:10:20.027 [2024-10-08T18:37:48.790Z] tsc_hz: 2700000000 (cyc) 00:10:20.027 [2024-10-08T18:37:48.790Z] ====================================== 00:10:20.027 [2024-10-08T18:37:48.790Z] poller_cost: 1493 (cyc), 552 (nsec) 00:10:20.027 00:10:20.027 real 0m1.449s 00:10:20.027 user 0m1.339s 00:10:20.027 sys 0m0.098s 00:10:20.027 20:37:48 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.027 20:37:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 ************************************ 00:10:20.027 END TEST thread_poller_perf 00:10:20.027 ************************************ 00:10:20.027 20:37:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:20.027 00:10:20.027 real 0m3.357s 00:10:20.027 user 0m2.939s 00:10:20.027 sys 0m0.413s 00:10:20.027 20:37:48 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.027 20:37:48 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 ************************************ 00:10:20.027 END TEST thread 00:10:20.027 ************************************ 00:10:20.027 20:37:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:20.027 20:37:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:20.027 20:37:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:20.027 20:37:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.027 20:37:48 -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 ************************************ 00:10:20.027 START TEST app_cmdline 00:10:20.027 ************************************ 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:20.027 * Looking for test storage... 
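For the numbers reported in the two thread_poller_perf summaries above, poller_cost is simply busy cycles divided by total_run_count, and tsc_hz = 2700000000 (2.7 cycles per nanosecond) converts that to the nanosecond column. A quick sanity check with the printed counters:

    echo $(( 2730433932 / 138000 ))     # run 1 (1 us period): 19785 cyc/poll, ~7327 ns
    echo $(( 2705265927 / 1811000 ))    # run 2 (0 us period): 1493 cyc/poll,  ~552 ns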
00:10:20.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.027 20:37:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.027 --rc genhtml_branch_coverage=1 00:10:20.027 --rc genhtml_function_coverage=1 00:10:20.027 --rc genhtml_legend=1 00:10:20.027 --rc geninfo_all_blocks=1 00:10:20.027 --rc geninfo_unexecuted_blocks=1 00:10:20.027 00:10:20.027 ' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.027 --rc genhtml_branch_coverage=1 00:10:20.027 --rc genhtml_function_coverage=1 00:10:20.027 --rc genhtml_legend=1 00:10:20.027 --rc geninfo_all_blocks=1 00:10:20.027 --rc geninfo_unexecuted_blocks=1 
00:10:20.027 00:10:20.027 ' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.027 --rc genhtml_branch_coverage=1 00:10:20.027 --rc genhtml_function_coverage=1 00:10:20.027 --rc genhtml_legend=1 00:10:20.027 --rc geninfo_all_blocks=1 00:10:20.027 --rc geninfo_unexecuted_blocks=1 00:10:20.027 00:10:20.027 ' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.027 --rc genhtml_branch_coverage=1 00:10:20.027 --rc genhtml_function_coverage=1 00:10:20.027 --rc genhtml_legend=1 00:10:20.027 --rc geninfo_all_blocks=1 00:10:20.027 --rc geninfo_unexecuted_blocks=1 00:10:20.027 00:10:20.027 ' 00:10:20.027 20:37:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:20.027 20:37:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1598638 00:10:20.027 20:37:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:20.027 20:37:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1598638 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1598638 ']' 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.027 20:37:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 [2024-10-08 20:37:48.853013] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:10:20.287 [2024-10-08 20:37:48.853158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598638 ] 00:10:20.287 [2024-10-08 20:37:48.970617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.546 [2024-10-08 20:37:49.192665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.115 20:37:49 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.115 20:37:49 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:21.115 20:37:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:21.684 { 00:10:21.684 "version": "SPDK v25.01-pre git sha1 716daf683", 00:10:21.684 "fields": { 00:10:21.684 "major": 25, 00:10:21.684 "minor": 1, 00:10:21.684 "patch": 0, 00:10:21.684 "suffix": "-pre", 00:10:21.684 "commit": "716daf683" 00:10:21.684 } 00:10:21.684 } 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:21.684 20:37:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:21.684 20:37:50 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.252 request: 00:10:22.252 { 00:10:22.252 "method": "env_dpdk_get_mem_stats", 00:10:22.252 "req_id": 1 00:10:22.252 } 00:10:22.252 Got JSON-RPC error response 00:10:22.252 response: 00:10:22.252 { 00:10:22.252 "code": -32601, 00:10:22.252 "message": "Method not found" 00:10:22.252 } 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:22.252 20:37:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1598638 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1598638 ']' 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1598638 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1598638 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1598638' 00:10:22.252 killing process with pid 1598638 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@969 -- # kill 1598638 00:10:22.252 20:37:50 app_cmdline -- common/autotest_common.sh@974 -- # wait 1598638 00:10:23.187 00:10:23.187 real 0m3.051s 00:10:23.187 user 0m4.066s 00:10:23.187 sys 0m0.841s 00:10:23.187 20:37:51 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.187 20:37:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:23.187 ************************************ 00:10:23.187 END TEST app_cmdline 00:10:23.187 ************************************ 00:10:23.187 20:37:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:23.187 20:37:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.187 20:37:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.187 20:37:51 -- common/autotest_common.sh@10 -- # set +x 00:10:23.187 ************************************ 00:10:23.187 START TEST version 00:10:23.187 ************************************ 00:10:23.187 20:37:51 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:23.187 * Looking for test storage... 
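The -32601 response above is the point of this cmdline test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist is rejected as unknown. A minimal sketch against the same binary and RPC names (default socket /var/tmp/spdk.sock; paths relative to the spdk tree):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown earlier
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly these two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # not allowed: JSON-RPC error -32601 "Method not found"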
00:10:23.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:23.187 20:37:51 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.187 20:37:51 version -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.187 20:37:51 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.187 20:37:51 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.187 20:37:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.187 20:37:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.187 20:37:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.187 20:37:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.187 20:37:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.187 20:37:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.187 20:37:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.187 20:37:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.187 20:37:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.187 20:37:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.187 20:37:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.187 20:37:51 version -- scripts/common.sh@344 -- # case "$op" in 00:10:23.187 20:37:51 version -- scripts/common.sh@345 -- # : 1 00:10:23.187 20:37:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.187 20:37:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.187 20:37:51 version -- scripts/common.sh@365 -- # decimal 1 00:10:23.187 20:37:51 version -- scripts/common.sh@353 -- # local d=1 00:10:23.187 20:37:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.187 20:37:51 version -- scripts/common.sh@355 -- # echo 1 00:10:23.187 20:37:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.187 20:37:51 version -- scripts/common.sh@366 -- # decimal 2 00:10:23.187 20:37:51 version -- scripts/common.sh@353 -- # local d=2 00:10:23.187 20:37:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.187 20:37:51 version -- scripts/common.sh@355 -- # echo 2 00:10:23.187 20:37:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.188 20:37:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.188 20:37:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.188 20:37:51 version -- scripts/common.sh@368 -- # return 0 00:10:23.188 20:37:51 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.188 20:37:51 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.188 --rc genhtml_branch_coverage=1 00:10:23.188 --rc genhtml_function_coverage=1 00:10:23.188 --rc genhtml_legend=1 00:10:23.188 --rc geninfo_all_blocks=1 00:10:23.188 --rc geninfo_unexecuted_blocks=1 00:10:23.188 00:10:23.188 ' 00:10:23.188 20:37:51 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.188 --rc genhtml_branch_coverage=1 00:10:23.188 --rc genhtml_function_coverage=1 00:10:23.188 --rc genhtml_legend=1 00:10:23.188 --rc geninfo_all_blocks=1 00:10:23.188 --rc geninfo_unexecuted_blocks=1 00:10:23.188 00:10:23.188 ' 00:10:23.188 20:37:51 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.188 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.188 --rc genhtml_branch_coverage=1 00:10:23.188 --rc genhtml_function_coverage=1 00:10:23.188 --rc genhtml_legend=1 00:10:23.188 --rc geninfo_all_blocks=1 00:10:23.188 --rc geninfo_unexecuted_blocks=1 00:10:23.188 00:10:23.188 ' 00:10:23.188 20:37:51 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.188 --rc genhtml_branch_coverage=1 00:10:23.188 --rc genhtml_function_coverage=1 00:10:23.188 --rc genhtml_legend=1 00:10:23.188 --rc geninfo_all_blocks=1 00:10:23.188 --rc geninfo_unexecuted_blocks=1 00:10:23.188 00:10:23.188 ' 00:10:23.188 20:37:51 version -- app/version.sh@17 -- # get_header_version major 00:10:23.188 20:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # cut -f2 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.188 20:37:51 version -- app/version.sh@17 -- # major=25 00:10:23.188 20:37:51 version -- app/version.sh@18 -- # get_header_version minor 00:10:23.188 20:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # cut -f2 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.188 20:37:51 version -- app/version.sh@18 -- # minor=1 00:10:23.188 20:37:51 version -- app/version.sh@19 -- # get_header_version patch 00:10:23.188 20:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # cut -f2 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.188 20:37:51 version -- app/version.sh@19 -- # patch=0 00:10:23.188 20:37:51 version -- app/version.sh@20 -- # get_header_version suffix 00:10:23.188 20:37:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # cut -f2 00:10:23.188 20:37:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.188 20:37:51 version -- app/version.sh@20 -- # suffix=-pre 00:10:23.188 20:37:51 version -- app/version.sh@22 -- # version=25.1 00:10:23.188 20:37:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:23.188 20:37:51 version -- app/version.sh@28 -- # version=25.1rc0 00:10:23.188 20:37:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:23.188 20:37:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:23.447 20:37:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:23.447 20:37:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:23.447 00:10:23.447 real 0m0.268s 00:10:23.447 user 0m0.177s 00:10:23.447 sys 0m0.123s 00:10:23.447 20:37:51 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.447 
20:37:51 version -- common/autotest_common.sh@10 -- # set +x 00:10:23.447 ************************************ 00:10:23.447 END TEST version 00:10:23.447 ************************************ 00:10:23.447 20:37:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:23.447 20:37:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:23.447 20:37:51 -- spdk/autotest.sh@194 -- # uname -s 00:10:23.447 20:37:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:23.447 20:37:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:23.447 20:37:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:23.447 20:37:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:23.447 20:37:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:10:23.447 20:37:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:10:23.447 20:37:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.447 20:37:51 -- common/autotest_common.sh@10 -- # set +x 00:10:23.447 20:37:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:10:23.447 20:37:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:10:23.447 20:37:52 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:10:23.447 20:37:52 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:10:23.447 20:37:52 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:10:23.447 20:37:52 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:10:23.447 20:37:52 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:23.447 20:37:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.447 20:37:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.447 20:37:52 -- common/autotest_common.sh@10 -- # set +x 00:10:23.447 ************************************ 00:10:23.447 START TEST nvmf_tcp 00:10:23.447 ************************************ 00:10:23.447 20:37:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:23.447 * Looking for test storage... 
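The version check that just passed is a plain text scrape of include/spdk/version.h: each component is pulled out with the grep/cut/tr pipeline logged above and the assembled string (25.1rc0 in this tree) is compared with what the Python package reports. Condensed sketch, run from the spdk tree; only MAJOR is shown, the other fields are extracted the same way:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
    python3 -c 'import spdk; print(spdk.__version__)'                                                # -> 25.1rc0 here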
00:10:23.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:23.447 20:37:52 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.447 20:37:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.447 20:37:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.707 20:37:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:23.707 20:37:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:23.707 20:37:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.707 20:37:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.707 ************************************ 00:10:23.707 START TEST nvmf_target_core 00:10:23.707 ************************************ 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:23.707 * Looking for test storage... 00:10:23.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.707 --rc genhtml_legend=1 00:10:23.707 --rc geninfo_all_blocks=1 00:10:23.707 --rc geninfo_unexecuted_blocks=1 00:10:23.707 00:10:23.707 ' 00:10:23.707 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.707 --rc genhtml_branch_coverage=1 00:10:23.707 --rc genhtml_function_coverage=1 00:10:23.708 --rc genhtml_legend=1 00:10:23.708 --rc geninfo_all_blocks=1 00:10:23.708 --rc geninfo_unexecuted_blocks=1 00:10:23.708 00:10:23.708 ' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.708 --rc genhtml_branch_coverage=1 00:10:23.708 --rc genhtml_function_coverage=1 00:10:23.708 --rc genhtml_legend=1 00:10:23.708 --rc geninfo_all_blocks=1 00:10:23.708 --rc geninfo_unexecuted_blocks=1 00:10:23.708 00:10:23.708 ' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.708 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.968 
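The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33, where the xtrace shows bash evaluating '[' '' -eq 1 ']': the variable under test expanded to an empty string, and -eq requires an integer operand. The run continues normally afterwards (the common.sh@37 check executes next), and the same warning reappears each time common.sh is re-sourced by a later test. A minimal sketch of the failure mode and a guarded form; the variable name here is hypothetical, not the one used in common.sh:

    # reproduces the warning: an empty string is not an integer
    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # -> [: : integer expression expected

    # defaulting the value before the numeric test avoids the noise
    [ "${flag:-0}" -eq 1 ] && echo enabled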
************************************ 00:10:23.968 START TEST nvmf_abort 00:10:23.968 ************************************ 00:10:23.968 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:23.968 * Looking for test storage... 00:10:23.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.968 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:23.968 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:10:23.968 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.228 --rc genhtml_branch_coverage=1 00:10:24.228 --rc genhtml_function_coverage=1 00:10:24.228 --rc genhtml_legend=1 00:10:24.228 --rc geninfo_all_blocks=1 00:10:24.228 --rc geninfo_unexecuted_blocks=1 00:10:24.228 00:10:24.228 ' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.228 --rc genhtml_branch_coverage=1 00:10:24.228 --rc genhtml_function_coverage=1 00:10:24.228 --rc genhtml_legend=1 00:10:24.228 --rc geninfo_all_blocks=1 00:10:24.228 --rc geninfo_unexecuted_blocks=1 00:10:24.228 00:10:24.228 ' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.228 --rc genhtml_branch_coverage=1 00:10:24.228 --rc genhtml_function_coverage=1 00:10:24.228 --rc genhtml_legend=1 00:10:24.228 --rc geninfo_all_blocks=1 00:10:24.228 --rc geninfo_unexecuted_blocks=1 00:10:24.228 00:10:24.228 ' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.228 --rc genhtml_branch_coverage=1 00:10:24.228 --rc genhtml_function_coverage=1 00:10:24.228 --rc genhtml_legend=1 00:10:24.228 --rc geninfo_all_blocks=1 00:10:24.228 --rc geninfo_unexecuted_blocks=1 00:10:24.228 00:10:24.228 ' 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.228 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
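nvmftestinit, invoked here by abort.sh, produces the interface and namespace setup logged below: it discovers the two e810 ports (cvl_0_0 and cvl_0_1), moves the target-side port into a private network namespace, assigns the 10.0.0.x addresses, opens TCP port 4420, and verifies reachability in both directions with ping. A condensed sketch of that sequence, with device names, addresses, and port taken from the log that follows (an illustration of the steps, not the full common.sh implementation):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> root ns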
00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.229 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.518 20:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:27.518 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:27.518 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.518 20:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:27.518 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:27.519 Found net devices under 0000:84:00.0: cvl_0_0 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:27.519 Found net devices under 0000:84:00.1: cvl_0_1 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.519 20:37:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:27.519 00:10:27.519 --- 10.0.0.2 ping statistics --- 00:10:27.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.519 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:27.519 00:10:27.519 --- 10.0.0.1 ping statistics --- 00:10:27.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.519 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1600996 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1600996 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1600996 ']' 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.519 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.519 [2024-10-08 20:37:55.811929] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:10:27.519 [2024-10-08 20:37:55.812009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.519 [2024-10-08 20:37:55.924078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.519 [2024-10-08 20:37:56.148090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.519 [2024-10-08 20:37:56.148214] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.519 [2024-10-08 20:37:56.148251] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.519 [2024-10-08 20:37:56.148291] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.519 [2024-10-08 20:37:56.148306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.519 [2024-10-08 20:37:56.150041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.519 [2024-10-08 20:37:56.150144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.519 [2024-10-08 20:37:56.150149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.453 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.453 [2024-10-08 20:37:57.209121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 Malloc0 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 Delay0 
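Target-side configuration for the abort test is done entirely over RPC: create the TCP transport, build a 64 MB malloc bdev with 4096-byte blocks, wrap it in a delay bdev, and then (just below) expose it through subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. Written out as direct scripts/rpc.py invocations, assuming rpc_cmd in the log simply forwards to rpc.py; the arguments are the ones logged here and in the lines that follow:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 1000000 ns (1 ms) latencies on the delay bdev presumably keep I/O outstanding long enough for the abort example launched below to find commands still queued and cancel them.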
00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 [2024-10-08 20:37:57.282075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.711 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:28.711 [2024-10-08 20:37:57.387566] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:31.240 Initializing NVMe Controllers 00:10:31.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:31.240 controller IO queue size 128 less than required 00:10:31.240 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:31.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:31.240 Initialization complete. Launching workers. 
00:10:31.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28117 00:10:31.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28182, failed to submit 62 00:10:31.240 success 28121, unsuccessful 61, failed 0 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.240 rmmod nvme_tcp 00:10:31.240 rmmod nvme_fabrics 00:10:31.240 rmmod nvme_keyring 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1600996 ']' 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1600996 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1600996 ']' 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1600996 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600996 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600996' 00:10:31.240 killing process with pid 1600996 00:10:31.240 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1600996 00:10:31.241 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1600996 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:31.499 20:38:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.499 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.034 00:10:34.034 real 0m9.724s 00:10:34.034 user 0m14.943s 00:10:34.034 sys 0m3.585s 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.034 ************************************ 00:10:34.034 END TEST nvmf_abort 00:10:34.034 ************************************ 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.034 ************************************ 00:10:34.034 START TEST nvmf_ns_hotplug_stress 00:10:34.034 ************************************ 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:34.034 * Looking for test storage... 
00:10:34.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.034 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.035 --rc genhtml_branch_coverage=1 00:10:34.035 --rc genhtml_function_coverage=1 00:10:34.035 --rc genhtml_legend=1 00:10:34.035 --rc geninfo_all_blocks=1 00:10:34.035 --rc geninfo_unexecuted_blocks=1 00:10:34.035 00:10:34.035 ' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.035 --rc genhtml_branch_coverage=1 00:10:34.035 --rc genhtml_function_coverage=1 00:10:34.035 --rc genhtml_legend=1 00:10:34.035 --rc geninfo_all_blocks=1 00:10:34.035 --rc geninfo_unexecuted_blocks=1 00:10:34.035 00:10:34.035 ' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.035 --rc genhtml_branch_coverage=1 00:10:34.035 --rc genhtml_function_coverage=1 00:10:34.035 --rc genhtml_legend=1 00:10:34.035 --rc geninfo_all_blocks=1 00:10:34.035 --rc geninfo_unexecuted_blocks=1 00:10:34.035 00:10:34.035 ' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.035 --rc genhtml_branch_coverage=1 00:10:34.035 --rc genhtml_function_coverage=1 00:10:34.035 --rc genhtml_legend=1 00:10:34.035 --rc geninfo_all_blocks=1 00:10:34.035 --rc geninfo_unexecuted_blocks=1 00:10:34.035 00:10:34.035 ' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.035 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.322 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:37.323 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.323 
20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:37.323 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:37.323 Found net devices under 0000:84:00.0: cvl_0_0 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:37.323 Found net devices under 0000:84:00.1: cvl_0_1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:10:37.323 00:10:37.323 --- 10.0.0.2 ping statistics --- 00:10:37.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.323 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:10:37.323 00:10:37.323 --- 10.0.0.1 ping statistics --- 00:10:37.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.323 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.323 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1603637 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1603637 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1603637 ']' 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.324 20:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.324 [2024-10-08 20:38:05.752241] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:10:37.324 [2024-10-08 20:38:05.752420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.324 [2024-10-08 20:38:05.912597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.582 [2024-10-08 20:38:06.138118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.582 [2024-10-08 20:38:06.138231] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.582 [2024-10-08 20:38:06.138269] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.582 [2024-10-08 20:38:06.138307] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.582 [2024-10-08 20:38:06.138335] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
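For readers reconstructing what nvmftestinit just did in the trace above: the two E810 ports are split across network namespaces so one host can act as both initiator and target, the namespaced port gets the target address 10.0.0.2, the root-namespace port gets 10.0.0.1, TCP port 4420 is opened in the firewall, connectivity is ping-checked both ways, and nvmf_tgt is then launched inside the namespace. A minimal sketch of those steps, condensed from the traced commands (not the common.sh code verbatim; paths are shortened, it assumes it is run as root from the spdk checkout, and the trailing "&" stands in for the script's waitforlisten handling):

    #!/usr/bin/env bash
    # Condensed sketch of the nvmftestinit / nvmf_tcp_init steps traced above.
    set -euo pipefail
    TGT_IF=cvl_0_0            # port moved into the target namespace
    INI_IF=cvl_0_1            # port left in the root namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP listener port and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Start the target inside the namespace, as the trace does next.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Splitting target and initiator across namespaces is what lets a single physical host stand in for a two-node NVMe/TCP setup in this test.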
00:10:37.582 [2024-10-08 20:38:06.139572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.582 [2024-10-08 20:38:06.139632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.582 [2024-10-08 20:38:06.139635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:37.582 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.152 [2024-10-08 20:38:06.890919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.412 20:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:38.670 20:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.927 [2024-10-08 20:38:07.545766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.927 20:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.493 20:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:39.752 Malloc0 00:10:39.752 20:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:40.010 Delay0 00:10:40.010 20:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.575 20:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:41.141 NULL1 00:10:41.141 20:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:41.399 20:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1604190 00:10:41.399 20:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:41.399 20:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:41.399 20:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.838 Read completed with error (sct=0, sc=11) 00:10:42.838 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.838 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:42.838 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:43.404 true 00:10:43.404 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:43.404 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.970 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 
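The pattern that repeats from here to the end of the run is the hotplug loop itself: spdk_nvme_perf keeps issuing random reads against 10.0.0.2:4420, and for as long as that process (PID 1604190 above) stays alive the script detaches namespace 1, re-attaches Delay0, and resizes NULL1 one unit larger on every pass. A sketch of that loop, reconstructed from the traced rpc.py calls (paths shortened, variable names illustrative; not the ns_hotplug_stress.sh source verbatim):

    #!/usr/bin/env bash
    # Reconstructed from the ns_hotplug_stress trace above.
    rpc_py=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"    # bump the NULL1 bdev size
    done
    wait "$PERF_PID"

The interleaved "Read completed with error (sct=0, sc=11)" notices and ctrlr_bdev.c read errors in the log are I/O failing while namespaces are being detached and re-attached, which is the kind of behaviour this stress test is meant to provoke.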
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.228 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:44.228 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:44.795 true 00:10:44.795 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:44.795 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.361 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.620 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:45.620 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:46.186 true 00:10:46.186 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:46.186 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.754 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.012 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:47.012 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:47.271 true 00:10:47.271 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:47.271 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:47.836 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.094 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:48.094 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:48.351 true 00:10:48.351 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:48.351 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.610 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.176 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:49.176 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:49.433 true 00:10:49.433 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:49.433 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.691 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.949 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:49.949 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:50.514 true 00:10:50.514 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:50.514 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.771 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.028 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:51.028 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:51.593 true 00:10:51.593 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:10:51.593 20:38:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.159 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.159 [the same "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notice repeats several more times between 00:10:52.159 and 00:10:52.424; repetitions elided] 00:10:52.424 [2024-10-08 20:38:21.103534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [the identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeats with timestamps from 2024-10-08 20:38:21.103661 through 20:38:21.117316; repetitions elided] [2024-10-08 20:38:21.117380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.117926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.118994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119113] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.119926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 
[2024-10-08 20:38:21.120781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.120982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:52.426 [2024-10-08 20:38:21.121623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.121987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 
20:38:21.122527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.122919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.123531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.124188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.124258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.124329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.124389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.426 [2024-10-08 20:38:21.124448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:52.427 [2024-10-08 20:38:21.124802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.124936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.125993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.427 [2024-10-08 20:38:21.126442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
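The flood of rejections above reduces to one length comparison: for each read, NLB 1 * block size 512 exceeds the SGL length of 1 that the initiator described, so the target fails the command before it ever reaches the bdev, and the suppressed completions (sct=0, sc=15) appear to report the matching data-SGL-length-invalid status back to the host. A minimal shell sketch of that comparison, using the values from the log (the variable names are illustrative, not SPDK code):

nlb=1; block_size=512; sgl_length=1
if [ $((nlb * block_size)) -gt "$sgl_length" ]; then
  # the command is completed with an error instead of being submitted to the bdev
  echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
fi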
[... read errors continue (20:38:21.126515, 20:38:21.126570) ...]
00:10:52.427 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
[... read errors continue (20:38:21.126644, 20:38:21.126719) ...]
00:10:52.427 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
[... read errors continue (20:38:21.126784 through 20:38:21.127949) ...]
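The two script trace lines above show what the stress test is doing while those reads fail: target/ns_hotplug_stress.sh sets null_size to 1009 and applies the new size to the NULL1 bdev (presumably the bdev backing the test namespace) over JSON-RPC. A sketch of that step, using only what the trace shows (the value 1009 and the bdev name NULL1 come straight from the log; the surrounding loop in the script is not reproduced here):

null_size=1009   # value set at ns_hotplug_stress.sh line 49 in this iteration
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # line 50: resize NULL1 to the new size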
[... the same nvmf_bdev_ctrlr_read_cmd error repeats, differing only in timestamp (20:38:21.128029 through 20:38:21.148272) ...]
00:10:52.430 [2024-10-08 20:38:21.148330] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.148922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.149923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 
[2024-10-08 20:38:21.150208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.150971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.151941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.152954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.430 [2024-10-08 20:38:21.153031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.153927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154064] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.154950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 
[2024-10-08 20:38:21.155688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.155950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.156912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.157985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.431 [2024-10-08 20:38:21.158809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.158869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.158929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159061] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.159919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.160660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.160730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.160793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.160868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.160930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 
[2024-10-08 20:38:21.161387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.161979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.162982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.163942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164607] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.164918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.165974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.166048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.166108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.166170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.432 [2024-10-08 20:38:21.166235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 
[2024-10-08 20:38:21.166476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.166980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.167976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.168955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.169963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170339] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.170984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.171925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.172005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 [2024-10-08 20:38:21.172066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.433 
[2024-10-08 20:38:21.172130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.720 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c:361 "Read NLB 1 * block size 512 > SGL length 1" error line repeated several hundred times between 20:38:21.172130 and 20:38:21.210221 (elapsed 00:10:52.433-00:10:52.722); duplicate lines collapsed ...]
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.210990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.722 [2024-10-08 20:38:21.211472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 
[2024-10-08 20:38:21.211886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.211947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.212031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.212089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.212146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.213959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.214973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215903] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.215983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.216954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.217947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 
[2024-10-08 20:38:21.218024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.218937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.723 [2024-10-08 20:38:21.219347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.219922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.220942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221201] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.221938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.222593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 
[2024-10-08 20:38:21.223350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.223987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.224963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.225982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.724 [2024-10-08 20:38:21.226515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226578] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.226978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.227982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 
[2024-10-08 20:38:21.228548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.228943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.229998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.230948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.231334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232558] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.232973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.233940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.234020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.234077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 [2024-10-08 20:38:21.234134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.725 
[2024-10-08 20:38:21.234191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.725 [ ... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeats continuously (2024-10-08 20:38:21.234249 through 20:38:21.261634); duplicate lines omitted ... ]
00:10:52.729 Message suppressed 999 times: [2024-10-08 20:38:21.261720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.729 Read completed with error (sct=0, sc=15)
00:10:52.729 [ ... the same error continues repeating (2024-10-08 20:38:21.261784 through 20:38:21.272265); duplicate lines omitted ... ]
00:10:52.731 [2024-10-08 20:38:21.272322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.272974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.273953] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.274989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 
[2024-10-08 20:38:21.275527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.275913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.731 [2024-10-08 20:38:21.276717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.276781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.276843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.276903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.277964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.278959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279326] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.279968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.280918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 
[2024-10-08 20:38:21.280990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.281994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.282931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.732 [2024-10-08 20:38:21.283777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.283842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.283897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.283951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.284024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.284081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.284136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.284191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.284250] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.285992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 
[2024-10-08 20:38:21.286643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.286972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.287932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.288974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.289991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290066] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.290990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.733 [2024-10-08 20:38:21.291619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.291702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.291770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.291836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.291897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.291960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 
[2024-10-08 20:38:21.292035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.292925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.293950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.294592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.734 [2024-10-08 20:38:21.295834] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.734 [2024-10-08 20:38:21.295895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.734 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* entries for timestamps 2024-10-08 20:38:21.295968 through 20:38:21.333534 omitted ...]
00:10:52.740 [2024-10-08 20:38:21.333595] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.333681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.333746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.333806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.333866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.333925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.334989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 
[2024-10-08 20:38:21.335212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:52.740 [2024-10-08 20:38:21.335532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.335972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.336940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 
20:38:21.337016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.337949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.338429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:52.740 [2024-10-08 20:38:21.339164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.339927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:52.740 [2024-10-08 20:38:21.340820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.340882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.340945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.341948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342350] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.342972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.343948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.344022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.344085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 
[2024-10-08 20:38:21.344158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.344218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.344935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.345937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.346938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.347982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.348044] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.348102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.741 [2024-10-08 20:38:21.348161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.348920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 
[2024-10-08 20:38:21.349851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.349982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.350965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.351972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.352988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.353069] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.353956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.354983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 
[2024-10-08 20:38:21.355616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.742 [2024-10-08 20:38:21.355861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.355926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.355987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.356984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.357051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.357114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.357179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.743 [2024-10-08 20:38:21.357246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:52.743 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats with successive timestamps from 20:38:21.357309 through 20:38:21.395537 ...]
00:10:52.748 [2024-10-08 20:38:21.395600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.395987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.396938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397457] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.397962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.748 [2024-10-08 20:38:21.398831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.398891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.398950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 
[2024-10-08 20:38:21.399071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.399767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.400961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.401957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.402991] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.403974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 
[2024-10-08 20:38:21.404870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.404984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:52.749 [2024-10-08 20:38:21.405099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.405987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.749 [2024-10-08 20:38:21.406340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 
20:38:21.406400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.406852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.407953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:52.750 [2024-10-08 20:38:21.408530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.408991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.409980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.410954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.411950] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.412957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.413387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.414183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.414250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 
[2024-10-08 20:38:21.414324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.750 [2024-10-08 20:38:21.414387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.414963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.415954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.416975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417567] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.417956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.418966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.419024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.419077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.419136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.419199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 [2024-10-08 20:38:21.419260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:52.751 
[2024-10-08 20:38:21.419320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated at microsecond intervals from 2024-10-08 20:38:21.419384 through 20:38:21.457933 (elapsed 00:10:52.751 - 00:10:53.032) ...] 
00:10:53.032 
[2024-10-08 20:38:21.457996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.458961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.032 [2024-10-08 20:38:21.459639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:10:53.032 true
00:10:53.034 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:53.034 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:10:53.034 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.488821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.488885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.488947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.489870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 
[2024-10-08 20:38:21.490570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.036 [2024-10-08 20:38:21.490776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.490839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.490898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.490960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.491990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.492964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.493991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494307] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.494945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.495554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 
[2024-10-08 20:38:21.496528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.496986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.497982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.498042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.498098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.498155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.498218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.037 [2024-10-08 20:38:21.498281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.498997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499672] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.499977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.500966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.501893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 
[2024-10-08 20:38:21.501973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.502968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.503990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.038 [2024-10-08 20:38:21.504362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.504935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505250] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.505945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.506975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 
[2024-10-08 20:38:21.507040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.507961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.508952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.509998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.510060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.510126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.510987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511110] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.511949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.039 [2024-10-08 20:38:21.512015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 [2024-10-08 20:38:21.512775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.040 
[2024-10-08 20:38:21.512840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:53.040 [... the same *ERROR* line from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeats for every read completion between 20:38:21.512903 and 20:38:21.551193 ...] 
00:10:53.045 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:10:53.045 [2024-10-08 20:38:21.551251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.551930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.045 [2024-10-08 20:38:21.552487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552896] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.552975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.553938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 
[2024-10-08 20:38:21.554571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.554928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.555937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.556869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.557930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558486] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.558932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.046 [2024-10-08 20:38:21.559901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.559976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 
[2024-10-08 20:38:21.560161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.560952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.561966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.562950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563517] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.563989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.564952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 
[2024-10-08 20:38:21.565131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.565534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.566939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.567032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.567093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.047 [2024-10-08 20:38:21.567155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.567969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568852] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.568977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.569950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.570921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 
[2024-10-08 20:38:21.571002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.571960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.572961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.573932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.574015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.574073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.574133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.574194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.048 [2024-10-08 20:38:21.574256] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.049 
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats several hundred times between 2024-10-08 20:38:21.574 and 20:38:21.613 (elapsed 00:10:53.049-00:10:53.054) ...]
[2024-10-08 20:38:21.613011] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.613995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 
[2024-10-08 20:38:21.614636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.614977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.615969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.054 [2024-10-08 20:38:21.616517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.616576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.616641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.616855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.616917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.616977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:53.055 [2024-10-08 20:38:21.617351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.617776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:53.055 [2024-10-08 20:38:21.618430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.618967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.619955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.620971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621740] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.621943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.622985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 
[2024-10-08 20:38:21.623589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.055 [2024-10-08 20:38:21.623909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.623976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.624957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.625517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.626986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627354] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.627990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.628975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 
[2024-10-08 20:38:21.629041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.629956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.056 [2024-10-08 20:38:21.630522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.630964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.631026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.631101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.631166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.631231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.631294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.632984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633117] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.633991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 
[2024-10-08 20:38:21.634762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.634954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.635974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:53.057 [2024-10-08 20:38:21.636499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:10:53.057 [2024-10-08 20:38:21.636563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:53.057 [... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeats back to back here, message timestamps 20:38:21.636563 through 20:38:21.649666 ...]
00:10:53.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:53.994 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:53.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:54.252 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here through 00:10:54.253 ...]
00:10:54.510 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:10:54.510 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:10:54.768 true
00:10:54.768 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:10:54.768 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:55.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:55.334 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:55.334 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here through 00:10:55.851 ...]
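The trace entries above are one pass of the hotplug-stress loop: ns_hotplug_stress.sh re-adds the Delay0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1, resizes the NULL1 null bdev to a new, larger size (1010 on this pass), checks with kill -0 that the target process is still alive, and then removes the namespace again while initiator reads keep completing with errors (the suppressed messages). A minimal bash sketch of that pass follows; only the rpc.py path, the NQN, the bdev names, the size 1010 and the pid 1604190 come from the trace, while the variable names themselves are illustrative.

# One hotplug-stress pass, mirroring the rpc.py calls traced above (illustrative sketch).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
TGT_PID=1604190                      # pid probed by the script's "kill -0" liveness check
null_size=1010                       # the trace shows this growing by 1 on every pass

"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0       # re-attach the Delay0 bdev as a namespace
"$RPC" bdev_null_resize NULL1 "$null_size"       # resize NULL1 while I/O is still in flight
kill -0 "$TGT_PID"                               # exits non-zero if the target process has died
"$RPC" nvmf_subsystem_remove_ns "$NQN" 1         # detach namespace ID 1 again
null_size=$((null_size + 1))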
00:10:55.851 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:10:55.851 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:10:56.109 true
00:10:56.109 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:10:56.109 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:57.042 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:57.042 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here ...]
00:10:57.300 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:10:57.300 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:10:57.559 true
00:10:57.559 20:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:10:57.559 20:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:58.126 20:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:58.385 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:10:58.385 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:10:58.951 true
00:10:58.951 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:10:58.951 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:00.322 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:00.322 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here through 00:11:00.580 ...]
00:11:00.837 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:11:00.837 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:11:01.404 true
00:11:01.404 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:11:01.404 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:02.777 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:02.777 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here ...]
00:11:02.777 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:11:02.777 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:11:03.359 true
00:11:03.359 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190
00:11:03.359 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:03.990 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:03.990 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeats here ...]
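The passes above only confirm success through the exit status of each rpc.py call (the bare "true" lines). To check by hand that a resize or a namespace hot-add actually took effect, the standard rpc.py query calls could be run against the same target; this is an optional manual check sketched here, not something the script runs in this log, and the expected fields are assumptions about the JSON output.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" bdev_get_bdevs -b NULL1      # num_blocks should reflect the most recent bdev_null_resize
"$RPC" nvmf_get_subsystems          # cnode1's namespace list should show Delay0 only while it is attached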
00:11:04.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:04.265 [2024-10-08 20:38:32.811110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:04.265 [... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeats back to back here, message timestamps 20:38:32.811110 through 20:38:32.829458 ...]
00:11:04.267 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:04.267 [... the same ctrlr_bdev.c:361 error continues, message timestamps 20:38:32.829522 through 20:38:32.830809 ...]
00:11:04.268 [2024-10-08
20:38:32.830873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.830934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:04.268 [2024-10-08 20:38:32.831664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:04.268 [2024-10-08 20:38:32.831799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.831926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832388] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.832921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.833013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.833074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.833137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.833204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.833265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 
[2024-10-08 20:38:32.834838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.834970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.835998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.836977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.837052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.268 [2024-10-08 20:38:32.837116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.837924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838132] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.838925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.839954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 
[2024-10-08 20:38:32.840296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.840986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.841941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.842957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.269 [2024-10-08 20:38:32.843014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843476] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.843958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.844940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 
[2024-10-08 20:38:32.845267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.845985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.846963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.847722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848893] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.848977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.849985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.850055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.850112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.850169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.270 [2024-10-08 20:38:32.850226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 
[2024-10-08 20:38:32.850506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.850966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.851994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.852059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.852124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.852183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.852244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.853961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271 [2024-10-08 20:38:32.854448] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.271
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats verbatim several hundred more times, with message timestamps running from 2024-10-08 20:38:32.854502 through 20:38:32.891543 and Jenkins timestamps 00:11:04.271 through 00:11:04.277 ...]
00:11:04.277 [2024-10-08 20:38:32.891603] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.891685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.891750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.891813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.891877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.891938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.892961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.893020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.893077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.893138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.893196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 
[2024-10-08 20:38:32.893252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.894968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.895951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.896932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897196] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.897953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.277 [2024-10-08 20:38:32.898699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.277 [2024-10-08 20:38:32.898767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.898826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.898880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.898934] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.899943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 
[2024-10-08 20:38:32.900473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.900997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.901937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.902014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.902075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.902864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.902937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.903943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904429] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.278 [2024-10-08 20:38:32.904945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.905945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 
[2024-10-08 20:38:32.906059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.906972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.907809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.908979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909582] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.909990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.910927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 
[2024-10-08 20:38:32.911226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.279 [2024-10-08 20:38:32.911761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.911820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.911878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.911938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.912990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.913926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914515] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280 [2024-10-08 20:38:32.914572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.280
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "*ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry repeats continuously from 2024-10-08 20:38:32.914623 through 20:38:32.952530 (pipeline time 00:11:04.280-00:11:04.285); duplicate entries collapsed ...]
00:11:04.285 [2024-10-08 20:38:32.952614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.952702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.952767] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.952838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.952900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.952974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.285 [2024-10-08 20:38:32.953671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.953726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.953786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.953844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.953909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.953983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 
[2024-10-08 20:38:32.954378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.954948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.955930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.956551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.957928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958303] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.958951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.959877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 
[2024-10-08 20:38:32.959940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.286 [2024-10-08 20:38:32.960923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.961958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.962955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963364] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.963995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.964887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 
[2024-10-08 20:38:32.964947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.965483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.287 [2024-10-08 20:38:32.966984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.967929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968613] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.968970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.969881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.288 [2024-10-08 20:38:32.970716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.970781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.970847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.970915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.970978] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.971976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 
[2024-10-08 20:38:32.972621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.972985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.973995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.974057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.288 [2024-10-08 20:38:32.974116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.974930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.975941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 [2024-10-08 20:38:32.976014] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.289 
[2024-10-08 20:38:32.976 through 20:38:33.013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same error line repeated for each iteration of the unit test) 00:11:04.579 
[2024-10-08 20:38:33.013765] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.013828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.013886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.013945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.014984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 
[2024-10-08 20:38:33.015611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.015971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.016998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.017974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.018928] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.019001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.019844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.019913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.019978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.580 [2024-10-08 20:38:33.020603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.020692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.020754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.020816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.020881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.020957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 
[2024-10-08 20:38:33.021380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.021930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.022946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.023786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024805] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.024991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.025974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 
[2024-10-08 20:38:33.026384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.026943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.581 [2024-10-08 20:38:33.027022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.027971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.028795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.028863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.028921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.028987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.029987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030358] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.030960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.031944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 
[2024-10-08 20:38:33.032024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.032983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.033998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.582 [2024-10-08 20:38:33.034484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.034938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035364] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.035986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.036841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 
[2024-10-08 20:38:33.037431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.583 [2024-10-08 20:38:33.037760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.037954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 20:38:33.038974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.583 [2024-10-08 
20:38:33.039032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:04.583 - 00:11:04.587 ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated once per read command issued by the unit test, SPDK timestamps 20:38:33.039087 through 20:38:33.062468)
00:11:04.587 [2024-10-08 20:38:33.062530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
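The message above is the expected negative-path output of this read-command unit test: nvmf_bdev_ctrlr_read_cmd rejects any read whose transfer length (NLB * block size) exceeds the SGL length supplied with the request. The following is a minimal sketch of that length check, with illustrative names (read_cmd_length_ok and main are hypothetical, not the SPDK source; only the comparison reported at ctrlr_bdev.c:361 is taken from the log):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: reject a read whose transfer length (NLB * block size)
 * is larger than the SGL length carried by the request. */
static int
read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	uint64_t xfer_len = nlb * block_size;

	if (xfer_len > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
			nlb, block_size, sgl_length);
		return 0;	/* command would be completed with an SGL length error */
	}
	return 1;
}

int
main(void)
{
	/* The values seen in this log: NLB 1, block size 512, SGL length 1. */
	return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
}

With NLB 1, block size 512, and an SGL length of 1, the check fails, which is why the test run emits this error once for every read it submits.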
00:11:04.587 [2024-10-08 20:38:33.062594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:04.587 - 00:11:04.589 ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated once per read command issued by the unit test, SPDK timestamps 20:38:33.062664 through 20:38:33.075183)
00:11:04.589 [2024-10-08 20:38:33.075252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.075946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076939] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.076997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.077060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.077127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.077186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.077263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.077342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.078960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 
[2024-10-08 20:38:33.079535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.079945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.080984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.081948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.589 [2024-10-08 20:38:33.082923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.082986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083050] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.083964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 
[2024-10-08 20:38:33.084714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.084963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.085969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.086659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.087997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088757] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.088942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.089952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.090021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.090083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.590 [2024-10-08 20:38:33.090146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 
[2024-10-08 20:38:33.090403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.090994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.091966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.092734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.093968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094208] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.094951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 
[2024-10-08 20:38:33.095868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.095993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.096979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.591 [2024-10-08 20:38:33.097561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.097936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.098986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592 [2024-10-08 20:38:33.099380] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.592
[The same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error entry repeats continuously, timestamps 2024-10-08 20:38:33.099447 through 20:38:33.136449, as the unit test repeatedly drives the read-command length check; the duplicate repetitions are collapsed here.]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.594
[2024-10-08 20:38:33.136512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.136954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.137933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.138975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.139955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.140013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.140068] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.140126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.140186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.597 [2024-10-08 20:38:33.140249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.140995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 
[2024-10-08 20:38:33.141845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.141976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.142833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.143767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.143831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.143895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.143967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 true 00:11:04.598 [2024-10-08 20:38:33.144094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.144969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.145982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146044] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.146981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 [2024-10-08 20:38:33.147588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.598 
[2024-10-08 20:38:33.147658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.147725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.147790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.148998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.149971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.150969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151146] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.151944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.152008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.152070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.152948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 
[2024-10-08 20:38:33.153631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.153938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.154999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.599 [2024-10-08 20:38:33.155381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.155961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156849] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.156909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.157983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.158995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 
[2024-10-08 20:38:33.159120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.159975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:04.600 [2024-10-08 20:38:33.160208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.600 [2024-10-08 20:38:33.160333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 
20:38:33.160578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.160980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.161982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.162048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.162125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.162211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.600 [2024-10-08 20:38:33.162274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:04.600 [2024-10-08 20:38:33.162344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd logs this identical "Read NLB 1 * block size 512 > SGL length 1" error once per read submitted by the test; the entries from 20:38:33.162344 onward differ only in their timestamps]
00:11:04.604 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
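Note on the recurring error: the read is rejected because the transfer implied by the command (number of logical blocks times the 512-byte block size) is larger than the buffer described by its SGL, and the completion status sct=0, sc=15 is the NVMe generic status "Data SGL Length Invalid" (0x0f). The fragment below is a minimal standalone sketch of that kind of length check, not the SPDK source; the function and macro names are illustrative assumptions.

/*
 * Illustrative sketch only (hypothetical names, not SPDK code): reject a read
 * whose data length exceeds the SGL-described buffer, which is what produces
 * the "Read NLB n * block size b > SGL length l" message above.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe generic command status 0x0f is "Data SGL Length Invalid" (sct=0, sc=15). */
#define SC_DATA_SGL_LENGTH_INVALID 0x0f

static int
check_read_length(uint16_t nlb_field, uint32_t block_size, uint64_t sgl_length)
{
	/* The NLB field in an NVMe read command is zero-based, so add 1. */
	uint64_t num_blocks = (uint64_t)nlb_field + 1;

	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
			num_blocks, block_size, sgl_length);
		return SC_DATA_SGL_LENGTH_INVALID;
	}
	return 0;
}

int
main(void)
{
	/* The case this test exercises: one 512-byte block against a 1-byte SGL. */
	int sc = check_read_length(0, 512, 1);
	printf("status code: 0x%02x\n", sc); /* prints 0x0f */
	return 0;
}

Built with any C toolchain, it reproduces the same arithmetic the unit test exercises: 1 block * 512 bytes does not fit in a 1-byte SGL, so the command completes with sc=15.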
[the identical ctrlr_bdev.c:361 "Read NLB 1 * block size 512 > SGL length 1" error continues for the remaining reads in this pass]
00:11:04.606 [2024-10-08 20:38:33.201835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.201899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.201959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.202974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203486] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.203949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.204989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 
[2024-10-08 20:38:33.205190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.205585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.606 [2024-10-08 20:38:33.206477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.206969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.207962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.208958] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.209968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 
[2024-10-08 20:38:33.210819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.210947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.211987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.212965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.607 [2024-10-08 20:38:33.213024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.213989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214115] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.214442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.215971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 
[2024-10-08 20:38:33.216522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.216960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.217954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.218943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.219954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220013] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.608 [2024-10-08 20:38:33.220561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.221976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 
[2024-10-08 20:38:33.222104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.222993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.223933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.224982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.609 [2024-10-08 20:38:33.225521] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.614 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.615 
[2024-10-08 20:38:33.263152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.263903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.264993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.265970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266871] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.266929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.615 [2024-10-08 20:38:33.267915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.267986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.268910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 
[2024-10-08 20:38:33.268994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.269955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.270988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.271948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272199] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.272899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.273701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.273764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.273824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.273883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.273941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 
[2024-10-08 20:38:33.274550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.274943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.616 [2024-10-08 20:38:33.275629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.275714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.275778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.275840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.275905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.275982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.276947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277918] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.277994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.278923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 
[2024-10-08 20:38:33.279551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.279925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.280968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.281730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.282271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.617 [2024-10-08 20:38:33.282342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.282997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283230] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.283986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 
[2024-10-08 20:38:33.284800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.284996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.285939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.618 [2024-10-08 20:38:33.286909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously, once per read-command test case, from 2024-10-08 20:38:33.286909 through 20:38:33.324605 (elapsed 00:11:04.618 - 00:11:04.897)]
00:11:04.897 [2024-10-08 20:38:33.324670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:04.897 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.897 [2024-10-08 20:38:33.325386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.325990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.326961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:04.897 [2024-10-08 20:38:33.327020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.327944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.328965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.329974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330423] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.897 [2024-10-08 20:38:33.330485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.330974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.331645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 
[2024-10-08 20:38:33.332698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.332954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.333953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.334949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.335978] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.336997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 
[2024-10-08 20:38:33.337871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.337937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.898 [2024-10-08 20:38:33.338001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.338561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.339996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.340984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341693] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.341949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.342983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 
[2024-10-08 20:38:33.343506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.343967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.344952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.345018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.345090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.345151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.345216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.899 [2024-10-08 20:38:33.345280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.345345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.345399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.345457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.345516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.345989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.346982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347216] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.347964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 [2024-10-08 20:38:33.348928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.900 
00:11:04.900 [2024-10-08 20:38:33.348983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:04.905 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats several hundred more times between 20:38:33.348983 and 20:38:33.387311; the duplicate log lines are elided here ...]
[2024-10-08 20:38:33.387369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.905 [2024-10-08 20:38:33.387429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.905 [2024-10-08 20:38:33.387493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.905 [2024-10-08 20:38:33.387558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.387976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.388986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.389976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390704] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.390963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.391983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 
[2024-10-08 20:38:33.392326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.392600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.393992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.394960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.395035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.906 [2024-10-08 20:38:33.395094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.395987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396333] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.396938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.907 [2024-10-08 20:38:33.397725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.397996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398058] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.398610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.399981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 
[2024-10-08 20:38:33.400088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.400966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.907 [2024-10-08 20:38:33.401785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.401847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.401908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.401985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.402965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403404] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.403968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.404994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 
[2024-10-08 20:38:33.405055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.405946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.406018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.908 [2024-10-08 20:38:33.406073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.406989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.407055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.407107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.407951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.408983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409046] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.409949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 [2024-10-08 20:38:33.410666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.909 
[2024-10-08 20:38:33.410723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:04.909 [... same *ERROR* line from ctrlr_bdev.c:361 repeated verbatim, unit-test timestamps 2024-10-08 20:38:33.410723 through 20:38:33.448485, Jenkins timestamps 00:11:04.909 through 00:11:04.915 ...]
00:11:04.915 [2024-10-08 20:38:33.448485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915
[2024-10-08 20:38:33.448548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.448951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.449781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.449837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.449905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.449990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.450998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.451980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452600] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.452929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.453813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 
[2024-10-08 20:38:33.454415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.454974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.455994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.456052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.915 [2024-10-08 20:38:33.456103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.456976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.457992] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.458978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 
[2024-10-08 20:38:33.459835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.459970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.460962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.461995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.462477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.916 [2024-10-08 20:38:33.463429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463488] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.463996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.464985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 
[2024-10-08 20:38:33.465127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.465952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.466995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.917 [2024-10-08 20:38:33.467829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.467953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.468027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.468089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.468152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.468219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.468278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:04.917 [2024-10-08 20:38:33.469266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.917 [2024-10-08 20:38:33.469818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.469878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.469952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.470937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.471949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 [2024-10-08 20:38:33.472462] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.918 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.486 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:05.744 true 00:11:05.744 20:38:34
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:05.744 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.119 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.119 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:07.120 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:07.686 true 00:11:07.686 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:07.686 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.253 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.769 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:08.769 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:09.336 true 00:11:09.336 20:38:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:09.336 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.595 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.424 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:10.425 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:10.994 true 00:11:10.994 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:10.994 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.931 Initializing NVMe Controllers 00:11:11.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.931 Controller IO queue size 128, less than required. 00:11:11.931 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:11.931 Controller IO queue size 128, less than required. 00:11:11.931 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:11.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:11.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:11.931 Initialization complete. Launching workers. 
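The sh@44 through sh@50 records repeating above (and continuing below the workload summary) all come from one small loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O generator started earlier (PID 1604190 in this run) is still alive, the test hot-removes namespace 1, re-adds it backed by the Delay0 bdev, and grows the NULL1 bdev by one step. A minimal sketch reconstructed from the xtrace, not copied from the script; the loop structure and variable names are assumptions, only the rpc.py calls and their arguments appear verbatim in the log (where rpc.py is invoked by its full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

    while kill -0 "$perf_pid"; do                                      # 1604190 here; loop ends once the workload exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back on the Delay0 bdev
        null_size=$((null_size + 1))                                   # 1017, 1018, 1019, ... in the trace
        rpc.py bdev_null_resize NULL1 "$null_size"                     # resize NULL1 while I/O is in flight
    done

When kill -0 finally fails (the "kill: (1604190) - No such process" message further down), the loop exits and the test tears down namespaces 1 and 2 before moving on to the multi-threaded phase.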
00:11:11.931 ======================================================== 00:11:11.931 Latency(us) 00:11:11.931 Device Information : IOPS MiB/s Average min max 00:11:11.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4488.27 2.19 19197.06 2213.08 1013371.28 00:11:11.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13296.13 6.49 9626.38 2800.67 542069.34 00:11:11.931 ======================================================== 00:11:11.931 Total : 17784.40 8.68 12041.74 2213.08 1013371.28 00:11:11.931 00:11:11.931 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.869 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:12.870 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:13.129 true 00:11:13.129 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604190 00:11:13.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1604190) - No such process 00:11:13.129 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1604190 00:11:13.129 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.068 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.328 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:14.328 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:14.328 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:14.328 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:14.328 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:15.268 null0 00:11:15.268 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.268 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.268 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:15.528 null1 00:11:15.528 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.528 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.528 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:16.467 null2 00:11:16.467 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:16.467 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:16.467 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:16.728 null3 00:11:16.988 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:16.988 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:16.988 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:17.558 null4 00:11:17.558 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:17.558 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:17.558 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:18.128 null5 00:11:18.128 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.128 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.128 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:18.698 null6 00:11:18.698 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.698 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.698 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:19.266 null7 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
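The records from sh@58 onward set up that multi-threaded phase: eight null bdevs (null0 through null7) are created with the same "100 4096" size and block-size arguments each time, and one add_remove worker per namespace ID is launched in the background, its PID collected for the final wait. A sketch of the setup as reconstructed from the trace; the loop syntax and the rpc.py shorthand are assumptions, the RPC names and arguments are from the log:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096    # null0..null7, one bdev per worker (sh@60)
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # worker i hotplugs NSID i+1 using bdev null<i> (sh@63)
        pids+=($!)                                   # sh@64: PID collected for the later wait
    done
    wait "${pids[@]}"                                # "wait 1608632 1608633 ..." in the log (sh@66)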
00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.266 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
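Each backgrounded worker runs the add_remove helper whose xtrace (sh@14 through sh@18) is interleaved above: it pins one namespace ID and one null bdev, then adds and removes that namespace ten times against nqn.2016-06.io.spdk:cnode1. Reconstructed from the trace; only the rpc.py invocations appear verbatim in the log:

    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do               # sh@16: ten add/remove cycles per worker
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }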
00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1608632 1608633 1608635 1608637 1608639 1608641 1608643 1608645 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.267 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.526 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.789 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.051 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.324 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.324 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.324 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.324 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.324 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.324 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.583 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.841 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.099 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.099 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.099 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.099 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.099 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.100 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.358 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.358 20:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.358 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.358 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.358 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.358 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.358 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.616 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.874 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.131 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.389 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.389 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.390 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.648 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.907 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:23.165 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.165 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:23.165 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:23.165 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.166 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:23.166 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:23.166 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.424 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.424 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:23.682 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.940 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.201 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:24.460 20:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:24.460 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.460 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:24.460 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:24.461 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:24.461 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:24.461 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.461 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.461 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.719 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.009 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.299 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:25.299 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.300 20:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.300 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.300 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.558 rmmod nvme_tcp 00:11:25.558 rmmod nvme_fabrics 00:11:25.558 rmmod nvme_keyring 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1603637 ']' 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1603637 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1603637 ']' 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1603637 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.558 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603637 00:11:25.818 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:25.818 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:25.818 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603637' 00:11:25.818 killing process with pid 1603637 00:11:25.818 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1603637 00:11:25.818 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1603637 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.078 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.079 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.079 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.618 00:11:28.618 real 0m54.513s 00:11:28.618 user 4m3.740s 00:11:28.618 sys 0m18.518s 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.618 ************************************ 00:11:28.618 END TEST nvmf_ns_hotplug_stress 00:11:28.618 ************************************ 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.618 ************************************ 00:11:28.618 START TEST nvmf_delete_subsystem 00:11:28.618 ************************************ 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:28.618 * Looking for test storage... 
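The ns_hotplug_stress trace above amounts to repeatedly attaching and detaching namespaces on nqn.2016-06.io.spdk:cnode1 while the test's I/O runs. A minimal sketch of that add/remove churn follows; only the rpc.py invocations and the null0..null7 / nsid 1..8 mapping are taken from the log, while the shuffle helper, loop shape, and iteration count are assumptions rather than the actual ns_hotplug_stress.sh implementation:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        # attach null0..null7 as namespace IDs 1..8 in a random order (assumed shuffle)
        for n in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # then detach the same namespace IDs again, also in a random order
        for n in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done
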
00:11:28.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:28.618 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:28.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.618 --rc genhtml_branch_coverage=1 00:11:28.618 --rc genhtml_function_coverage=1 00:11:28.618 --rc genhtml_legend=1 00:11:28.618 --rc geninfo_all_blocks=1 00:11:28.618 --rc geninfo_unexecuted_blocks=1 00:11:28.618 00:11:28.618 ' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:28.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.618 --rc genhtml_branch_coverage=1 00:11:28.618 --rc genhtml_function_coverage=1 00:11:28.618 --rc genhtml_legend=1 00:11:28.618 --rc geninfo_all_blocks=1 00:11:28.618 --rc geninfo_unexecuted_blocks=1 00:11:28.618 00:11:28.618 ' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:28.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.618 --rc genhtml_branch_coverage=1 00:11:28.618 --rc genhtml_function_coverage=1 00:11:28.618 --rc genhtml_legend=1 00:11:28.618 --rc geninfo_all_blocks=1 00:11:28.618 --rc geninfo_unexecuted_blocks=1 00:11:28.618 00:11:28.618 ' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:28.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.618 --rc genhtml_branch_coverage=1 00:11:28.618 --rc genhtml_function_coverage=1 00:11:28.618 --rc genhtml_legend=1 00:11:28.618 --rc geninfo_all_blocks=1 00:11:28.618 --rc geninfo_unexecuted_blocks=1 00:11:28.618 00:11:28.618 ' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.618 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.619 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:31.905 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.905 
20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:31.905 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:31.905 Found net devices under 0000:84:00.0: cvl_0_0 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:31.905 Found net devices under 0000:84:00.1: cvl_0_1 
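The device discovery logged above resolves each supported PCI function (here the two Intel E810 ports, 0x8086:0x159b at 0000:84:00.0 and 0000:84:00.1) to its kernel net interface by listing sysfs. A condensed sketch of that lookup, assembled from the xtrace rather than the verbatim nvmf/common.sh code path:

    net_devs=()
    for pci in 0000:84:00.0 0000:84:00.1; do
        # each PCI function exposes its netdev name under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
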
00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.905 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:11:31.906 00:11:31.906 --- 10.0.0.2 ping statistics --- 00:11:31.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.906 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:11:31.906 00:11:31.906 --- 10.0.0.1 ping statistics --- 00:11:31.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.906 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1611712 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1611712 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1611712 ']' 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.906 20:39:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.906 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.906 [2024-10-08 20:39:00.348081] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:11:31.906 [2024-10-08 20:39:00.348179] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.906 [2024-10-08 20:39:00.465099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.165 [2024-10-08 20:39:00.703322] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.165 [2024-10-08 20:39:00.703439] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.165 [2024-10-08 20:39:00.703488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.165 [2024-10-08 20:39:00.703530] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.165 [2024-10-08 20:39:00.703569] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.165 [2024-10-08 20:39:00.705352] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.165 [2024-10-08 20:39:00.705368] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.165 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.165 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:11:32.165 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:32.165 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.165 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.423 [2024-10-08 20:39:00.956240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:32.423 20:39:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.423 [2024-10-08 20:39:00.980939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.423 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.424 NULL1 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.424 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.424 Delay0 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1611783 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:32.424 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:32.424 [2024-10-08 20:39:01.089247] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
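The xtrace above records the bring-up for the delete-subsystem test: the SPDK target is started inside the cvl_0_0_ns_spdk namespace, a TCP transport and subsystem are created, a null bdev is wrapped in a delay bdev and attached as a namespace, and spdk_nvme_perf is pointed at 10.0.0.2:4420. As a reading aid, here is a minimal standalone sketch of that sequence under two assumptions that are not part of the log: an nvmf_tgt is already running and SPDK_DIR points at an SPDK checkout. It calls scripts/rpc.py directly where the test uses its rpc_cmd wrapper; every flag is copied from the trace above.

    #!/usr/bin/env bash
    # Sketch of the delete_subsystem bring-up seen in the trace.
    # Assumptions (not from the log): a running nvmf_tgt, SPDK_DIR set to an SPDK checkout.
    set -e
    rpc="$SPDK_DIR/scripts/rpc.py"

    # TCP transport with an 8 KiB I/O unit size ('-t tcp -o -u 8192' in the trace).
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Subsystem with a fixed serial number, up to 10 namespaces, any host allowed.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Back the namespace with a null bdev behind a delay bdev so plenty of I/O is
    # still outstanding when the subsystem is deleted.
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive I/O from the initiator side, then delete the subsystem while it runs.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait "$perf_pid" || true   # perf exits with errors once its subsystem is gone

Because the delete races with the 128-deep queue of I/Os held up by the delay bdev, the trace that follows is dominated by 'Read/Write completed with error (sct=0, sc=8)' completions, and the perf run ends with 'errors occurred' rather than a clean shutdown.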
00:11:34.326 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.326 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.326 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 starting I/O failed: -6 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 [2024-10-08 20:39:03.193410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f5390 is same with the state(6) to be set 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read 
completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Write completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.585 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 [2024-10-08 20:39:03.194488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f5750 is same with the state(6) to be set 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O 
failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 starting I/O failed: -6 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 [2024-10-08 20:39:03.195023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f0000c10 is same with the state(6) to be set 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with 
error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Write completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:34.586 Read completed with error (sct=0, sc=8) 00:11:35.521 [2024-10-08 20:39:04.153646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f6a70 is same with the state(6) to be set 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 [2024-10-08 20:39:04.196225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f000cff0 is same with the state(6) to be set 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, 
sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 [2024-10-08 20:39:04.196396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f03f000d650 is same with the state(6) to be set 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 [2024-10-08 20:39:04.196862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f5570 is same with the state(6) to be set 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Read completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.521 Write completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Write completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 00:11:35.522 Write completed with error (sct=0, sc=8) 00:11:35.522 Write completed with error (sct=0, sc=8) 00:11:35.522 Read completed with error (sct=0, sc=8) 
00:11:35.522 Write completed with error (sct=0, sc=8) 00:11:35.522 [2024-10-08 20:39:04.198256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f5930 is same with the state(6) to be set 00:11:35.522 Initializing NVMe Controllers 00:11:35.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:35.522 Controller IO queue size 128, less than required. 00:11:35.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:35.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:35.522 Initialization complete. Launching workers. 00:11:35.522 ======================================================== 00:11:35.522 Latency(us) 00:11:35.522 Device Information : IOPS MiB/s Average min max 00:11:35.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.22 0.08 897170.96 571.91 1013117.77 00:11:35.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.75 0.08 906533.71 378.35 1013508.94 00:11:35.522 ======================================================== 00:11:35.522 Total : 333.97 0.16 901789.73 378.35 1013508.94 00:11:35.522 00:11:35.522 [2024-10-08 20:39:04.199082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f6a70 (9): Bad file descriptor 00:11:35.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:35.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:35.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1611783 00:11:35.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1611783 00:11:36.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1611783) - No such process 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1611783 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1611783 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1611783 00:11:36.088 20:39:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.088 [2024-10-08 20:39:04.730712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1612350 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:36.088 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:36.088 [2024-10-08 20:39:04.816256] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
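The delay=0 / kill -0 / sleep 0.5 trace just above, and the repeated '(( delay++ > 20 ))' checks on the lines that follow, are the test polling for this second spdk_nvme_perf instance (PID 1612350) to exit after its subsystem is deleted. The loop body in delete_subsystem.sh is not reproduced in the log; from the trace its shape is roughly the following (a hypothetical reconstruction; perf_pid is the variable the trace shows being set to 1612350):

    # Poll for the perf process to disappear, giving up after ~10 s (20 * 0.5 s).
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        sleep 0.5
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf ($perf_pid) is still running" >&2
            exit 1
        fi
    done

In the trace the loop simply falls through once kill -0 starts reporting 'No such process', after which the test waits on the PID, drops its traps, and moves on to nvmftestfini.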
00:11:36.655 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.655 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:36.655 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:37.223 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:37.223 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:37.223 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:37.789 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:37.789 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:37.789 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.048 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.048 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:38.048 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.615 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.615 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:38.615 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.183 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.183 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:39.183 20:39:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.442 Initializing NVMe Controllers 00:11:39.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.442 Controller IO queue size 128, less than required. 00:11:39.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:39.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:39.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:39.443 Initialization complete. Launching workers. 
00:11:39.443 ======================================================== 00:11:39.443 Latency(us) 00:11:39.443 Device Information : IOPS MiB/s Average min max 00:11:39.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004002.67 1000183.51 1040859.26 00:11:39.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005689.48 1000245.36 1014587.48 00:11:39.443 ======================================================== 00:11:39.443 Total : 256.00 0.12 1004846.08 1000183.51 1040859.26 00:11:39.443 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1612350 00:11:39.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1612350) - No such process 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1612350 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.703 rmmod nvme_tcp 00:11:39.703 rmmod nvme_fabrics 00:11:39.703 rmmod nvme_keyring 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1611712 ']' 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1611712 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1611712 ']' 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1611712 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611712 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611712' 00:11:39.703 killing process with pid 1611712 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1611712 00:11:39.703 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1611712 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:40.274 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.275 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.275 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.275 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.275 20:39:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.178 00:11:42.178 real 0m14.006s 00:11:42.178 user 0m29.068s 00:11:42.178 sys 0m3.994s 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.178 ************************************ 00:11:42.178 END TEST nvmf_delete_subsystem 00:11:42.178 ************************************ 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.178 ************************************ 00:11:42.178 START TEST nvmf_host_management 00:11:42.178 ************************************ 00:11:42.178 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:42.437 * Looking for test storage... 
00:11:42.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.437 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.437 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.437 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.437 --rc genhtml_branch_coverage=1 00:11:42.437 --rc genhtml_function_coverage=1 00:11:42.437 --rc genhtml_legend=1 00:11:42.437 --rc geninfo_all_blocks=1 00:11:42.437 --rc geninfo_unexecuted_blocks=1 00:11:42.437 00:11:42.437 ' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.437 --rc genhtml_branch_coverage=1 00:11:42.437 --rc genhtml_function_coverage=1 00:11:42.437 --rc genhtml_legend=1 00:11:42.437 --rc geninfo_all_blocks=1 00:11:42.437 --rc geninfo_unexecuted_blocks=1 00:11:42.437 00:11:42.437 ' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.437 --rc genhtml_branch_coverage=1 00:11:42.437 --rc genhtml_function_coverage=1 00:11:42.437 --rc genhtml_legend=1 00:11:42.437 --rc geninfo_all_blocks=1 00:11:42.437 --rc geninfo_unexecuted_blocks=1 00:11:42.437 00:11:42.437 ' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.437 --rc genhtml_branch_coverage=1 00:11:42.437 --rc genhtml_function_coverage=1 00:11:42.437 --rc genhtml_legend=1 00:11:42.437 --rc geninfo_all_blocks=1 00:11:42.437 --rc geninfo_unexecuted_blocks=1 00:11:42.437 00:11:42.437 ' 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.437 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.697 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:42.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.698 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:45.234 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:45.234 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:45.234 Found net devices under 0000:84:00.0: cvl_0_0 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.234 20:39:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:45.234 Found net devices under 0000:84:00.1: cvl_0_1 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.234 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.493 20:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:11:45.493 00:11:45.493 --- 10.0.0.2 ping statistics --- 00:11:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.493 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:11:45.493 00:11:45.493 --- 10.0.0.1 ping statistics --- 00:11:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.493 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1615367 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1615367 00:11:45.493 20:39:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1615367 ']' 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.493 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:45.493 [2024-10-08 20:39:14.234114] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:11:45.493 [2024-10-08 20:39:14.234206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.752 [2024-10-08 20:39:14.345039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.010 [2024-10-08 20:39:14.569180] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.010 [2024-10-08 20:39:14.569308] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.010 [2024-10-08 20:39:14.569345] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.010 [2024-10-08 20:39:14.569382] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.010 [2024-10-08 20:39:14.569394] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:46.010 [2024-10-08 20:39:14.572709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.010 [2024-10-08 20:39:14.572805] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.010 [2024-10-08 20:39:14.572858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:46.010 [2024-10-08 20:39:14.572862] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.010 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.010 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:46.010 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:46.010 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.010 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 [2024-10-08 20:39:14.782383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 Malloc0 00:11:46.269 [2024-10-08 20:39:14.844618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1615428 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1615428 /var/tmp/bdevperf.sock 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1615428 ']' 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:46.269 { 00:11:46.269 "params": { 00:11:46.269 "name": "Nvme$subsystem", 00:11:46.269 "trtype": "$TEST_TRANSPORT", 00:11:46.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.269 "adrfam": "ipv4", 00:11:46.269 "trsvcid": "$NVMF_PORT", 00:11:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.269 "hdgst": ${hdgst:-false}, 00:11:46.269 "ddgst": ${ddgst:-false} 00:11:46.269 }, 00:11:46.269 "method": "bdev_nvme_attach_controller" 00:11:46.269 } 00:11:46.269 EOF 00:11:46.269 )") 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:11:46.269 20:39:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:46.269 "params": { 00:11:46.269 "name": "Nvme0", 00:11:46.269 "trtype": "tcp", 00:11:46.269 "traddr": "10.0.0.2", 00:11:46.269 "adrfam": "ipv4", 00:11:46.269 "trsvcid": "4420", 00:11:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:46.269 "hdgst": false, 00:11:46.269 "ddgst": false 00:11:46.269 }, 00:11:46.269 "method": "bdev_nvme_attach_controller" 00:11:46.269 }' 00:11:46.269 [2024-10-08 20:39:14.949842] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:11:46.269 [2024-10-08 20:39:14.949932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615428 ] 00:11:46.269 [2024-10-08 20:39:15.029732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.527 [2024-10-08 20:39:15.146058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.785 Running I/O for 10 seconds... 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.785 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.044 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:47.044 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:47.044 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:47.303 
20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=484 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 484 -ge 100 ']' 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.303 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.303 [2024-10-08 20:39:15.876018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c690 is same with the state(6) to be set 00:11:47.303 [2024-10-08 20:39:15.876149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c690 is same with the state(6) to be set 00:11:47.303 [2024-10-08 20:39:15.879204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.303 [2024-10-08 20:39:15.879247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.303 [2024-10-08 20:39:15.879288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.303 [2024-10-08 20:39:15.879303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.303 [2024-10-08 20:39:15.879318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.303 [2024-10-08 20:39:15.879331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.303 [2024-10-08 20:39:15.879354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.304 [2024-10-08 20:39:15.879367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.879381] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0100 is same with the state(6) to be set 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.304 20:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:47.304 [2024-10-08 20:39:15.894017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b0100 (9): Bad file descriptor 00:11:47.304 [2024-10-08 20:39:15.894125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 
20:39:15.894363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.894969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.894992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.304 [2024-10-08 20:39:15.895261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.304 [2024-10-08 20:39:15.895275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.895979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.895994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.896008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.896022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.305 [2024-10-08 20:39:15.896035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.305 [2024-10-08 20:39:15.896125] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9c9270 was disconnected and freed. reset controller. 
00:11:47.305 [2024-10-08 20:39:15.897236] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:47.305 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:47.305 00:11:47.305 Latency(us) 00:11:47.305 [2024-10-08T18:39:16.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.305 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:47.305 Job: Nvme0n1 ended in about 0.43 seconds with error 00:11:47.305 Verification LBA range: start 0x0 length 0x400 00:11:47.305 Nvme0n1 : 0.43 1338.62 83.66 148.74 0.00 41832.75 2439.40 34369.99 00:11:47.305 [2024-10-08T18:39:16.068Z] =================================================================================================================== 00:11:47.305 [2024-10-08T18:39:16.068Z] Total : 1338.62 83.66 148.74 0.00 41832.75 2439.40 34369.99 00:11:47.305 [2024-10-08 20:39:15.900161] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:47.305 [2024-10-08 20:39:15.910790] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1615428 00:11:48.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1615428) - No such process 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:48.242 { 00:11:48.242 "params": { 00:11:48.242 "name": "Nvme$subsystem", 00:11:48.242 "trtype": "$TEST_TRANSPORT", 00:11:48.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.242 "adrfam": "ipv4", 00:11:48.242 "trsvcid": "$NVMF_PORT", 00:11:48.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.242 "hdgst": ${hdgst:-false}, 00:11:48.242 "ddgst": ${ddgst:-false} 00:11:48.242 }, 00:11:48.242 "method": "bdev_nvme_attach_controller" 00:11:48.242 } 00:11:48.242 EOF 00:11:48.242 )") 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:11:48.242 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:48.242 "params": { 00:11:48.242 "name": "Nvme0", 00:11:48.242 "trtype": "tcp", 00:11:48.242 "traddr": "10.0.0.2", 00:11:48.242 "adrfam": "ipv4", 00:11:48.242 "trsvcid": "4420", 00:11:48.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:48.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:48.242 "hdgst": false, 00:11:48.242 "ddgst": false 00:11:48.242 }, 00:11:48.242 "method": "bdev_nvme_attach_controller" 00:11:48.242 }' 00:11:48.242 [2024-10-08 20:39:16.946826] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:11:48.242 [2024-10-08 20:39:16.946952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615697 ] 00:11:48.501 [2024-10-08 20:39:17.021342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.501 [2024-10-08 20:39:17.134304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.760 Running I/O for 1 seconds... 00:11:49.695 1536.00 IOPS, 96.00 MiB/s 00:11:49.695 Latency(us) 00:11:49.695 [2024-10-08T18:39:18.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.695 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:49.695 Verification LBA range: start 0x0 length 0x400 00:11:49.695 Nvme0n1 : 1.01 1585.40 99.09 0.00 0.00 39722.40 9514.86 34952.53 00:11:49.695 [2024-10-08T18:39:18.458Z] =================================================================================================================== 00:11:49.695 [2024-10-08T18:39:18.458Z] Total : 1585.40 99.09 0.00 0.00 39722.40 9514.86 34952.53 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.953 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.953 rmmod nvme_tcp 00:11:49.953 rmmod nvme_fabrics 00:11:49.953 rmmod nvme_keyring 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1615367 ']' 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1615367 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1615367 ']' 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1615367 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615367 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615367' 00:11:50.213 killing process with pid 1615367 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1615367 00:11:50.213 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1615367 00:11:50.472 [2024-10-08 20:39:19.170103] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.472 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.006 20:39:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:53.006 00:11:53.006 real 0m10.377s 00:11:53.006 user 0m22.144s 00:11:53.006 sys 0m3.673s 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.006 ************************************ 00:11:53.006 END TEST nvmf_host_management 00:11:53.006 ************************************ 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.006 ************************************ 00:11:53.006 START TEST nvmf_lvol 00:11:53.006 ************************************ 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.006 * Looking for test storage... 00:11:53.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.006 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.007 --rc genhtml_branch_coverage=1 00:11:53.007 --rc genhtml_function_coverage=1 00:11:53.007 --rc genhtml_legend=1 00:11:53.007 --rc geninfo_all_blocks=1 00:11:53.007 --rc geninfo_unexecuted_blocks=1 00:11:53.007 00:11:53.007 ' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.007 --rc genhtml_branch_coverage=1 00:11:53.007 --rc genhtml_function_coverage=1 00:11:53.007 --rc genhtml_legend=1 00:11:53.007 --rc geninfo_all_blocks=1 00:11:53.007 --rc geninfo_unexecuted_blocks=1 00:11:53.007 00:11:53.007 ' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.007 --rc genhtml_branch_coverage=1 00:11:53.007 --rc genhtml_function_coverage=1 00:11:53.007 --rc genhtml_legend=1 00:11:53.007 --rc geninfo_all_blocks=1 00:11:53.007 --rc geninfo_unexecuted_blocks=1 00:11:53.007 00:11:53.007 ' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.007 --rc genhtml_branch_coverage=1 00:11:53.007 --rc genhtml_function_coverage=1 00:11:53.007 --rc genhtml_legend=1 00:11:53.007 --rc geninfo_all_blocks=1 00:11:53.007 --rc geninfo_unexecuted_blocks=1 00:11:53.007 00:11:53.007 ' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
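Before nvmf_lvol does any real work, autotest_common probes lcov ("lcov --version | awk '{print $NF}'") and evaluates "lt 1.15 2", which is the cmp_versions walk traced above: both version strings are split on ".", "-" and ":" and compared component by component to decide which LCOV_OPTS to export. The sketch below reconstructs the shape of that check from the xtrace output rather than copying scripts/common.sh; the helper name version_lt is mine, and the real cmp_versions also handles ">", "=" and other operators.

# Sketch of the version comparison the trace above walks through (assumption: only the "<" case).
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # e.g. 1.15 -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # e.g. 2    -> (2)
    local v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && return 1      # first version is newer -> not "less than"
        ((d1 < d2)) && return 0      # first version is older -> "less than"
    done
    return 1                          # equal -> not "less than"
}

# Usage mirroring the probe in the log:
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov older than 2.x - enabling the branch/function coverage options"
fi

For the lcov 1.x shipped on this builder the first component already decides the result (1 < 2), which is why the trace exports the --rc lcov_branch_coverage / lcov_function_coverage option set right after the check.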
00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.007 20:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:56.300 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:56.300 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.300 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.301 20:39:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:56.301 Found net devices under 0000:84:00.0: cvl_0_0 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:56.301 Found net devices under 0000:84:00.1: cvl_0_1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:11:56.301 00:11:56.301 --- 10.0.0.2 ping statistics --- 00:11:56.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.301 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:56.301 00:11:56.301 --- 10.0.0.1 ping statistics --- 00:11:56.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.301 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1618061 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1618061 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1618061 ']' 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:56.301 20:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.301 [2024-10-08 20:39:24.800308] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:11:56.301 [2024-10-08 20:39:24.800394] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.301 [2024-10-08 20:39:24.883822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:56.560 [2024-10-08 20:39:25.099334] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.560 [2024-10-08 20:39:25.099453] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.560 [2024-10-08 20:39:25.099489] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.560 [2024-10-08 20:39:25.099534] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.560 [2024-10-08 20:39:25.099563] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.560 [2024-10-08 20:39:25.101753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.560 [2024-10-08 20:39:25.101859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.560 [2024-10-08 20:39:25.101870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.560 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.560 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:11:56.560 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:56.560 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.560 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.849 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.849 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:57.128 [2024-10-08 20:39:25.686309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.128 20:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.388 20:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:57.388 20:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.955 20:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:57.955 20:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:58.213 20:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:58.781 20:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9f850774-87c0-4c97-a494-b5422130c2e1 00:11:58.781 20:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9f850774-87c0-4c97-a494-b5422130c2e1 lvol 20 00:11:59.040 20:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d1faaeb2-a24c-4465-804a-49a73a091902 00:11:59.040 20:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:59.301 20:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1faaeb2-a24c-4465-804a-49a73a091902 00:12:00.240 20:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:00.240 [2024-10-08 20:39:28.968210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.240 20:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.806 20:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1618621 00:12:00.806 20:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:00.806 20:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:01.743 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d1faaeb2-a24c-4465-804a-49a73a091902 MY_SNAPSHOT 00:12:02.314 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ad178700-4637-44d9-b581-d98d873d2b11 00:12:02.314 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d1faaeb2-a24c-4465-804a-49a73a091902 30 00:12:02.572 20:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad178700-4637-44d9-b581-d98d873d2b11 MY_CLONE 00:12:03.139 20:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f910b7e7-aeb5-45ee-8fdf-f7bad7faef47 00:12:03.139 20:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f910b7e7-aeb5-45ee-8fdf-f7bad7faef47 00:12:04.076 20:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1618621 00:12:12.227 Initializing NVMe Controllers 00:12:12.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:12.227 Controller IO queue size 128, less than required. 00:12:12.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:12.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:12.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:12.227 Initialization complete. Launching workers. 00:12:12.227 ======================================================== 00:12:12.227 Latency(us) 00:12:12.227 Device Information : IOPS MiB/s Average min max 00:12:12.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10309.10 40.27 12424.08 2205.37 80435.76 00:12:12.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10196.50 39.83 12553.86 2285.15 79890.62 00:12:12.227 ======================================================== 00:12:12.227 Total : 20505.60 80.10 12488.61 2205.37 80435.76 00:12:12.227 00:12:12.227 20:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d1faaeb2-a24c-4465-804a-49a73a091902 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9f850774-87c0-4c97-a494-b5422130c2e1 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.227 rmmod nvme_tcp 00:12:12.227 rmmod nvme_fabrics 00:12:12.227 rmmod nvme_keyring 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1618061 ']' 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1618061 00:12:12.227 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1618061 ']' 00:12:12.228 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1618061 00:12:12.228 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:12:12.228 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.228 20:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1618061 00:12:12.487 20:39:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.487 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.487 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1618061' 00:12:12.487 killing process with pid 1618061 00:12:12.487 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1618061 00:12:12.487 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1618061 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.055 20:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.974 00:12:14.974 real 0m22.252s 00:12:14.974 user 1m13.265s 00:12:14.974 sys 0m6.796s 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 ************************************ 00:12:14.974 END TEST nvmf_lvol 00:12:14.974 ************************************ 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 ************************************ 00:12:14.974 START TEST nvmf_lvs_grow 00:12:14.974 ************************************ 00:12:14.974 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:15.233 * Looking for test storage... 
00:12:15.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.233 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.233 --rc genhtml_branch_coverage=1 00:12:15.233 --rc genhtml_function_coverage=1 00:12:15.233 --rc genhtml_legend=1 00:12:15.233 --rc geninfo_all_blocks=1 00:12:15.233 --rc geninfo_unexecuted_blocks=1 00:12:15.233 00:12:15.233 ' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.234 --rc genhtml_branch_coverage=1 00:12:15.234 --rc genhtml_function_coverage=1 00:12:15.234 --rc genhtml_legend=1 00:12:15.234 --rc geninfo_all_blocks=1 00:12:15.234 --rc geninfo_unexecuted_blocks=1 00:12:15.234 00:12:15.234 ' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.234 --rc genhtml_branch_coverage=1 00:12:15.234 --rc genhtml_function_coverage=1 00:12:15.234 --rc genhtml_legend=1 00:12:15.234 --rc geninfo_all_blocks=1 00:12:15.234 --rc geninfo_unexecuted_blocks=1 00:12:15.234 00:12:15.234 ' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.234 --rc genhtml_branch_coverage=1 00:12:15.234 --rc genhtml_function_coverage=1 00:12:15.234 --rc genhtml_legend=1 00:12:15.234 --rc geninfo_all_blocks=1 00:12:15.234 --rc geninfo_unexecuted_blocks=1 00:12:15.234 00:12:15.234 ' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:15.234 20:39:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:15.234 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.235 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:18.522 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:18.522 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.522 20:39:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:18.522 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:18.523 Found net devices under 0000:84:00.0: cvl_0_0 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:18.523 Found net devices under 0000:84:00.1: cvl_0_1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:12:18.523 00:12:18.523 --- 10.0.0.2 ping statistics --- 00:12:18.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.523 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:12:18.523 00:12:18.523 --- 10.0.0.1 ping statistics --- 00:12:18.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.523 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1622042 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1622042 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1622042 ']' 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.523 20:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:18.523 [2024-10-08 20:39:47.092979] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
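For reference, the network plumbing performed in the trace above (nvmf_tcp_init in nvmf/common.sh) condenses to roughly the shell sequence below. The interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are the values used in this run; relative paths are used here for brevity and the exact flags may differ in other environments — this is a condensed restatement of the trace, not the test script itself.
# move one port of the e810 pair into a private namespace to act as the NVMe/TCP target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface and sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the nvmf target inside the namespace; its RPC socket (/var/tmp/spdk.sock) is a Unix
# domain socket, so rpc.py can still reach it from the root namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &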
00:12:18.523 [2024-10-08 20:39:47.093148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.523 [2024-10-08 20:39:47.254099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.783 [2024-10-08 20:39:47.454442] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.783 [2024-10-08 20:39:47.454558] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.783 [2024-10-08 20:39:47.454593] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.783 [2024-10-08 20:39:47.454624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.783 [2024-10-08 20:39:47.454664] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.783 [2024-10-08 20:39:47.456009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.168 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:20.168 [2024-10-08 20:39:48.906008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 ************************************ 00:12:20.427 START TEST lvs_grow_clean 00:12:20.427 ************************************ 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:20.427 20:39:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:20.427 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:20.687 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:20.687 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:21.622 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=26911f8f-6847-4e65-898f-80f9ade83084 00:12:21.623 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:21.623 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:22.190 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:22.190 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:22.190 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 26911f8f-6847-4e65-898f-80f9ade83084 lvol 150 00:12:22.448 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 00:12:22.448 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.448 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:23.015 [2024-10-08 20:39:51.719873] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:23.015 [2024-10-08 20:39:51.720054] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:23.015 true 00:12:23.015 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
26911f8f-6847-4e65-898f-80f9ade83084 00:12:23.015 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:23.952 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:23.952 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.519 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 00:12:25.087 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:25.654 [2024-10-08 20:39:54.136093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.654 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1623004 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1623004 /var/tmp/bdevperf.sock 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1623004 ']' 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.220 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:26.220 [2024-10-08 20:39:54.975426] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
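The export/benchmark half of the test, traced above and continued below, amounts to roughly the following: the lvol bdev is exposed through an NVMe-oF subsystem on the namespaced address, and a second SPDK app (bdevperf) connects to it over TCP and drives random writes. Paths are shown relative to the SPDK tree and the lvol UUID placeholder stands in for the run-specific value; this is a condensed sketch of the commands visible in the trace, not the script itself.
# export the lvol bdev through an NVMe-oF subsystem listening in the target namespace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# bdevperf runs as its own SPDK app with a private RPC socket; -z makes it wait for RPC before starting I/O
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests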
00:12:26.220 [2024-10-08 20:39:54.975592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623004 ] 00:12:26.477 [2024-10-08 20:39:55.122852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.735 [2024-10-08 20:39:55.339350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.301 20:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.301 20:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:12:27.301 20:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:28.237 Nvme0n1 00:12:28.237 20:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:28.496 [ 00:12:28.496 { 00:12:28.496 "name": "Nvme0n1", 00:12:28.496 "aliases": [ 00:12:28.496 "8991f2a7-ac2d-4c5a-af03-a7c68dff72b6" 00:12:28.496 ], 00:12:28.496 "product_name": "NVMe disk", 00:12:28.496 "block_size": 4096, 00:12:28.496 "num_blocks": 38912, 00:12:28.496 "uuid": "8991f2a7-ac2d-4c5a-af03-a7c68dff72b6", 00:12:28.496 "numa_id": 1, 00:12:28.496 "assigned_rate_limits": { 00:12:28.496 "rw_ios_per_sec": 0, 00:12:28.496 "rw_mbytes_per_sec": 0, 00:12:28.496 "r_mbytes_per_sec": 0, 00:12:28.496 "w_mbytes_per_sec": 0 00:12:28.496 }, 00:12:28.496 "claimed": false, 00:12:28.496 "zoned": false, 00:12:28.496 "supported_io_types": { 00:12:28.496 "read": true, 00:12:28.496 "write": true, 00:12:28.496 "unmap": true, 00:12:28.496 "flush": true, 00:12:28.496 "reset": true, 00:12:28.496 "nvme_admin": true, 00:12:28.496 "nvme_io": true, 00:12:28.496 "nvme_io_md": false, 00:12:28.496 "write_zeroes": true, 00:12:28.496 "zcopy": false, 00:12:28.496 "get_zone_info": false, 00:12:28.496 "zone_management": false, 00:12:28.496 "zone_append": false, 00:12:28.496 "compare": true, 00:12:28.496 "compare_and_write": true, 00:12:28.496 "abort": true, 00:12:28.496 "seek_hole": false, 00:12:28.496 "seek_data": false, 00:12:28.496 "copy": true, 00:12:28.496 "nvme_iov_md": false 00:12:28.496 }, 00:12:28.496 "memory_domains": [ 00:12:28.496 { 00:12:28.496 "dma_device_id": "system", 00:12:28.496 "dma_device_type": 1 00:12:28.496 } 00:12:28.496 ], 00:12:28.496 "driver_specific": { 00:12:28.496 "nvme": [ 00:12:28.496 { 00:12:28.496 "trid": { 00:12:28.496 "trtype": "TCP", 00:12:28.496 "adrfam": "IPv4", 00:12:28.496 "traddr": "10.0.0.2", 00:12:28.496 "trsvcid": "4420", 00:12:28.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:28.496 }, 00:12:28.496 "ctrlr_data": { 00:12:28.496 "cntlid": 1, 00:12:28.496 "vendor_id": "0x8086", 00:12:28.496 "model_number": "SPDK bdev Controller", 00:12:28.496 "serial_number": "SPDK0", 00:12:28.496 "firmware_revision": "25.01", 00:12:28.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:28.496 "oacs": { 00:12:28.496 "security": 0, 00:12:28.496 "format": 0, 00:12:28.496 "firmware": 0, 00:12:28.496 "ns_manage": 0 00:12:28.496 }, 00:12:28.496 "multi_ctrlr": true, 00:12:28.496 
"ana_reporting": false 00:12:28.496 }, 00:12:28.496 "vs": { 00:12:28.496 "nvme_version": "1.3" 00:12:28.496 }, 00:12:28.496 "ns_data": { 00:12:28.496 "id": 1, 00:12:28.496 "can_share": true 00:12:28.496 } 00:12:28.496 } 00:12:28.496 ], 00:12:28.496 "mp_policy": "active_passive" 00:12:28.496 } 00:12:28.496 } 00:12:28.496 ] 00:12:28.496 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1623270 00:12:28.496 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:28.496 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:28.760 Running I/O for 10 seconds... 00:12:29.731 Latency(us) 00:12:29.731 [2024-10-08T18:39:58.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.732 Nvme0n1 : 1.00 6097.00 23.82 0.00 0.00 0.00 0.00 0.00 00:12:29.732 [2024-10-08T18:39:58.495Z] =================================================================================================================== 00:12:29.732 [2024-10-08T18:39:58.495Z] Total : 6097.00 23.82 0.00 0.00 0.00 0.00 0.00 00:12:29.732 00:12:30.666 20:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:30.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.666 Nvme0n1 : 2.00 7239.50 28.28 0.00 0.00 0.00 0.00 0.00 00:12:30.666 [2024-10-08T18:39:59.429Z] =================================================================================================================== 00:12:30.666 [2024-10-08T18:39:59.429Z] Total : 7239.50 28.28 0.00 0.00 0.00 0.00 0.00 00:12:30.666 00:12:30.926 true 00:12:30.926 20:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:30.926 20:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:31.495 20:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:31.495 20:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:31.495 20:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1623270 00:12:31.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.753 Nvme0n1 : 3.00 7789.67 30.43 0.00 0.00 0.00 0.00 0.00 00:12:31.753 [2024-10-08T18:40:00.516Z] =================================================================================================================== 00:12:31.753 [2024-10-08T18:40:00.516Z] Total : 7789.67 30.43 0.00 0.00 0.00 0.00 0.00 00:12:31.753 00:12:32.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.689 Nvme0n1 : 4.00 7779.00 30.39 0.00 0.00 0.00 0.00 0.00 00:12:32.689 [2024-10-08T18:40:01.452Z] 
=================================================================================================================== 00:12:32.689 [2024-10-08T18:40:01.452Z] Total : 7779.00 30.39 0.00 0.00 0.00 0.00 0.00 00:12:32.689 00:12:34.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.063 Nvme0n1 : 5.00 7747.20 30.26 0.00 0.00 0.00 0.00 0.00 00:12:34.063 [2024-10-08T18:40:02.826Z] =================================================================================================================== 00:12:34.063 [2024-10-08T18:40:02.826Z] Total : 7747.20 30.26 0.00 0.00 0.00 0.00 0.00 00:12:34.063 00:12:34.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.998 Nvme0n1 : 6.00 7544.00 29.47 0.00 0.00 0.00 0.00 0.00 00:12:34.998 [2024-10-08T18:40:03.761Z] =================================================================================================================== 00:12:34.998 [2024-10-08T18:40:03.761Z] Total : 7544.00 29.47 0.00 0.00 0.00 0.00 0.00 00:12:34.998 00:12:35.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.933 Nvme0n1 : 7.00 7697.29 30.07 0.00 0.00 0.00 0.00 0.00 00:12:35.933 [2024-10-08T18:40:04.696Z] =================================================================================================================== 00:12:35.933 [2024-10-08T18:40:04.696Z] Total : 7697.29 30.07 0.00 0.00 0.00 0.00 0.00 00:12:35.933 00:12:36.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.867 Nvme0n1 : 8.00 7767.00 30.34 0.00 0.00 0.00 0.00 0.00 00:12:36.867 [2024-10-08T18:40:05.630Z] =================================================================================================================== 00:12:36.867 [2024-10-08T18:40:05.630Z] Total : 7767.00 30.34 0.00 0.00 0.00 0.00 0.00 00:12:36.867 00:12:37.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.801 Nvme0n1 : 9.00 7680.11 30.00 0.00 0.00 0.00 0.00 0.00 00:12:37.801 [2024-10-08T18:40:06.564Z] =================================================================================================================== 00:12:37.801 [2024-10-08T18:40:06.564Z] Total : 7680.11 30.00 0.00 0.00 0.00 0.00 0.00 00:12:37.801 00:12:38.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.735 Nvme0n1 : 10.00 7775.70 30.37 0.00 0.00 0.00 0.00 0.00 00:12:38.735 [2024-10-08T18:40:07.498Z] =================================================================================================================== 00:12:38.735 [2024-10-08T18:40:07.498Z] Total : 7775.70 30.37 0.00 0.00 0.00 0.00 0.00 00:12:38.735 00:12:38.735 00:12:38.735 Latency(us) 00:12:38.735 [2024-10-08T18:40:07.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.735 Nvme0n1 : 10.02 7774.97 30.37 0.00 0.00 16452.50 7864.32 39224.51 00:12:38.735 [2024-10-08T18:40:07.498Z] =================================================================================================================== 00:12:38.735 [2024-10-08T18:40:07.498Z] Total : 7774.97 30.37 0.00 0.00 16452.50 7864.32 39224.51 00:12:38.735 { 00:12:38.735 "results": [ 00:12:38.735 { 00:12:38.735 "job": "Nvme0n1", 00:12:38.735 "core_mask": "0x2", 00:12:38.735 "workload": "randwrite", 00:12:38.735 "status": "finished", 00:12:38.735 "queue_depth": 128, 00:12:38.735 "io_size": 4096, 00:12:38.735 "runtime": 
10.017407, 00:12:38.735 "iops": 7774.966116481041, 00:12:38.735 "mibps": 30.370961392504068, 00:12:38.735 "io_failed": 0, 00:12:38.735 "io_timeout": 0, 00:12:38.735 "avg_latency_us": 16452.503966046806, 00:12:38.735 "min_latency_us": 7864.32, 00:12:38.735 "max_latency_us": 39224.50962962963 00:12:38.735 } 00:12:38.735 ], 00:12:38.735 "core_count": 1 00:12:38.735 } 00:12:38.735 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1623004 00:12:38.736 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1623004 ']' 00:12:38.736 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1623004 00:12:38.736 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:12:38.736 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.736 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1623004 00:12:38.994 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:38.994 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:38.994 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1623004' 00:12:38.994 killing process with pid 1623004 00:12:38.994 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1623004 00:12:38.994 Received shutdown signal, test time was about 10.000000 seconds 00:12:38.994 00:12:38.994 Latency(us) 00:12:38.994 [2024-10-08T18:40:07.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.994 [2024-10-08T18:40:07.757Z] =================================================================================================================== 00:12:38.994 [2024-10-08T18:40:07.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:38.994 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1623004 00:12:39.252 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.510 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.079 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:40.079 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:40.337 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:40.337 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:40.337 20:40:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:40.596 [2024-10-08 20:40:09.329642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:40.854 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:41.111 request: 00:12:41.111 { 00:12:41.111 "uuid": "26911f8f-6847-4e65-898f-80f9ade83084", 00:12:41.111 "method": "bdev_lvol_get_lvstores", 00:12:41.111 "req_id": 1 00:12:41.111 } 00:12:41.111 Got JSON-RPC error response 00:12:41.111 response: 00:12:41.111 { 00:12:41.111 "code": -19, 00:12:41.111 "message": "No such device" 00:12:41.111 } 00:12:41.111 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:41.111 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:41.111 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:41.111 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:41.111 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:41.370 aio_bdev 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:41.370 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:41.936 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 -t 2000 00:12:42.505 [ 00:12:42.505 { 00:12:42.505 "name": "8991f2a7-ac2d-4c5a-af03-a7c68dff72b6", 00:12:42.505 "aliases": [ 00:12:42.505 "lvs/lvol" 00:12:42.505 ], 00:12:42.505 "product_name": "Logical Volume", 00:12:42.505 "block_size": 4096, 00:12:42.505 "num_blocks": 38912, 00:12:42.505 "uuid": "8991f2a7-ac2d-4c5a-af03-a7c68dff72b6", 00:12:42.505 "assigned_rate_limits": { 00:12:42.505 "rw_ios_per_sec": 0, 00:12:42.505 "rw_mbytes_per_sec": 0, 00:12:42.505 "r_mbytes_per_sec": 0, 00:12:42.505 "w_mbytes_per_sec": 0 00:12:42.505 }, 00:12:42.505 "claimed": false, 00:12:42.505 "zoned": false, 00:12:42.505 "supported_io_types": { 00:12:42.505 "read": true, 00:12:42.505 "write": true, 00:12:42.505 "unmap": true, 00:12:42.505 "flush": false, 00:12:42.505 "reset": true, 00:12:42.505 "nvme_admin": false, 00:12:42.505 "nvme_io": false, 00:12:42.505 "nvme_io_md": false, 00:12:42.505 "write_zeroes": true, 00:12:42.505 "zcopy": false, 00:12:42.505 "get_zone_info": false, 00:12:42.505 "zone_management": false, 00:12:42.505 "zone_append": false, 00:12:42.505 "compare": false, 00:12:42.505 "compare_and_write": false, 00:12:42.505 "abort": false, 00:12:42.505 "seek_hole": true, 00:12:42.505 "seek_data": true, 00:12:42.505 "copy": false, 00:12:42.505 "nvme_iov_md": false 00:12:42.505 }, 00:12:42.505 "driver_specific": { 00:12:42.505 "lvol": { 00:12:42.505 "lvol_store_uuid": "26911f8f-6847-4e65-898f-80f9ade83084", 00:12:42.505 "base_bdev": "aio_bdev", 00:12:42.505 "thin_provision": false, 00:12:42.505 "num_allocated_clusters": 38, 00:12:42.505 "snapshot": false, 00:12:42.505 "clone": false, 00:12:42.505 "esnap_clone": false 00:12:42.505 } 00:12:42.505 } 00:12:42.505 } 00:12:42.505 ] 00:12:42.505 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:12:42.506 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:42.506 
20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:43.075 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:43.075 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:43.075 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:43.642 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:43.642 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8991f2a7-ac2d-4c5a-af03-a7c68dff72b6 00:12:44.208 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26911f8f-6847-4e65-898f-80f9ade83084 00:12:45.142 20:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:45.711 00:12:45.711 real 0m25.346s 00:12:45.711 user 0m25.163s 00:12:45.711 sys 0m2.788s 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:45.711 ************************************ 00:12:45.711 END TEST lvs_grow_clean 00:12:45.711 ************************************ 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:45.711 ************************************ 00:12:45.711 START TEST lvs_grow_dirty 00:12:45.711 ************************************ 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:45.711 20:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:46.645 20:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:46.645 20:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:47.213 20:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:12:47.213 20:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:12:47.213 20:40:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:47.781 20:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:47.781 20:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:47.781 20:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f6ab57a5-0275-4fe6-9963-a014de6b68fc lvol 150 00:12:48.717 20:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aafd0e34-0632-4d22-8197-e2e97627b54d 00:12:48.717 20:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:48.717 20:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:49.087 [2024-10-08 20:40:17.823827] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:49.087 [2024-10-08 20:40:17.824006] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:49.087 true 00:12:49.344 20:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:12:49.344 20:40:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:49.602 20:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:49.602 20:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:49.859 20:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aafd0e34-0632-4d22-8197-e2e97627b54d 00:12:50.426 20:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:50.993 [2024-10-08 20:40:19.706230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.993 20:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1625964 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1625964 /var/tmp/bdevperf.sock 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1625964 ']' 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.561 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.561 [2024-10-08 20:40:20.210846] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:12:51.561 [2024-10-08 20:40:20.211027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625964 ] 00:12:51.820 [2024-10-08 20:40:20.362519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.078 [2024-10-08 20:40:20.584404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.078 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.078 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:52.078 20:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:53.014 Nvme0n1 00:12:53.015 20:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:53.579 [ 00:12:53.579 { 00:12:53.579 "name": "Nvme0n1", 00:12:53.579 "aliases": [ 00:12:53.579 "aafd0e34-0632-4d22-8197-e2e97627b54d" 00:12:53.579 ], 00:12:53.579 "product_name": "NVMe disk", 00:12:53.579 "block_size": 4096, 00:12:53.579 "num_blocks": 38912, 00:12:53.579 "uuid": "aafd0e34-0632-4d22-8197-e2e97627b54d", 00:12:53.579 "numa_id": 1, 00:12:53.579 "assigned_rate_limits": { 00:12:53.579 "rw_ios_per_sec": 0, 00:12:53.579 "rw_mbytes_per_sec": 0, 00:12:53.579 "r_mbytes_per_sec": 0, 00:12:53.579 "w_mbytes_per_sec": 0 00:12:53.579 }, 00:12:53.579 "claimed": false, 00:12:53.579 "zoned": false, 00:12:53.579 "supported_io_types": { 00:12:53.579 "read": true, 00:12:53.579 "write": true, 00:12:53.579 "unmap": true, 00:12:53.579 "flush": true, 00:12:53.579 "reset": true, 00:12:53.579 "nvme_admin": true, 00:12:53.579 "nvme_io": true, 00:12:53.579 "nvme_io_md": false, 00:12:53.579 "write_zeroes": true, 00:12:53.579 "zcopy": false, 00:12:53.579 "get_zone_info": false, 00:12:53.579 "zone_management": false, 00:12:53.579 "zone_append": false, 00:12:53.579 "compare": true, 00:12:53.579 "compare_and_write": true, 00:12:53.579 "abort": true, 00:12:53.579 "seek_hole": false, 00:12:53.579 "seek_data": false, 00:12:53.579 "copy": true, 00:12:53.579 "nvme_iov_md": false 00:12:53.579 }, 00:12:53.579 "memory_domains": [ 00:12:53.579 { 00:12:53.579 "dma_device_id": "system", 00:12:53.579 "dma_device_type": 1 00:12:53.579 } 00:12:53.579 ], 00:12:53.579 "driver_specific": { 00:12:53.579 "nvme": [ 00:12:53.579 { 00:12:53.579 "trid": { 00:12:53.579 "trtype": "TCP", 00:12:53.579 "adrfam": "IPv4", 00:12:53.579 "traddr": "10.0.0.2", 00:12:53.579 "trsvcid": "4420", 00:12:53.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:53.579 }, 00:12:53.579 "ctrlr_data": { 00:12:53.579 "cntlid": 1, 00:12:53.579 "vendor_id": "0x8086", 00:12:53.579 "model_number": "SPDK bdev Controller", 00:12:53.579 "serial_number": "SPDK0", 00:12:53.579 "firmware_revision": "25.01", 00:12:53.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:53.579 "oacs": { 00:12:53.579 "security": 0, 00:12:53.579 "format": 0, 00:12:53.579 "firmware": 0, 00:12:53.579 "ns_manage": 0 00:12:53.579 }, 00:12:53.579 "multi_ctrlr": true, 00:12:53.579 
"ana_reporting": false 00:12:53.579 }, 00:12:53.579 "vs": { 00:12:53.579 "nvme_version": "1.3" 00:12:53.579 }, 00:12:53.579 "ns_data": { 00:12:53.579 "id": 1, 00:12:53.579 "can_share": true 00:12:53.579 } 00:12:53.579 } 00:12:53.579 ], 00:12:53.579 "mp_policy": "active_passive" 00:12:53.579 } 00:12:53.579 } 00:12:53.579 ] 00:12:53.579 20:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1626128 00:12:53.579 20:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:53.579 20:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:53.839 Running I/O for 10 seconds... 00:12:54.776 Latency(us) 00:12:54.776 [2024-10-08T18:40:23.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.776 Nvme0n1 : 1.00 6478.00 25.30 0.00 0.00 0.00 0.00 0.00 00:12:54.776 [2024-10-08T18:40:23.539Z] =================================================================================================================== 00:12:54.776 [2024-10-08T18:40:23.539Z] Total : 6478.00 25.30 0.00 0.00 0.00 0.00 0.00 00:12:54.776 00:12:55.711 20:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:12:55.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.711 Nvme0n1 : 2.00 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:12:55.711 [2024-10-08T18:40:24.474Z] =================================================================================================================== 00:12:55.711 [2024-10-08T18:40:24.474Z] Total : 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:12:55.711 00:12:56.277 true 00:12:56.277 20:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:12:56.277 20:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:56.536 20:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:56.536 20:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:56.536 20:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1626128 00:12:56.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.796 Nvme0n1 : 3.00 6456.33 25.22 0.00 0.00 0.00 0.00 0.00 00:12:56.796 [2024-10-08T18:40:25.559Z] =================================================================================================================== 00:12:56.796 [2024-10-08T18:40:25.559Z] Total : 6456.33 25.22 0.00 0.00 0.00 0.00 0.00 00:12:56.796 00:12:57.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.734 Nvme0n1 : 4.00 6445.50 25.18 0.00 0.00 0.00 0.00 0.00 00:12:57.734 [2024-10-08T18:40:26.497Z] 
=================================================================================================================== 00:12:57.734 [2024-10-08T18:40:26.497Z] Total : 6445.50 25.18 0.00 0.00 0.00 0.00 0.00 00:12:57.734 00:12:58.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.670 Nvme0n1 : 5.00 6451.80 25.20 0.00 0.00 0.00 0.00 0.00 00:12:58.670 [2024-10-08T18:40:27.433Z] =================================================================================================================== 00:12:58.670 [2024-10-08T18:40:27.433Z] Total : 6451.80 25.20 0.00 0.00 0.00 0.00 0.00 00:12:58.670 00:13:00.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.050 Nvme0n1 : 6.00 6434.83 25.14 0.00 0.00 0.00 0.00 0.00 00:13:00.050 [2024-10-08T18:40:28.813Z] =================================================================================================================== 00:13:00.050 [2024-10-08T18:40:28.813Z] Total : 6434.83 25.14 0.00 0.00 0.00 0.00 0.00 00:13:00.050 00:13:00.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.988 Nvme0n1 : 7.00 6431.86 25.12 0.00 0.00 0.00 0.00 0.00 00:13:00.988 [2024-10-08T18:40:29.751Z] =================================================================================================================== 00:13:00.988 [2024-10-08T18:40:29.751Z] Total : 6431.86 25.12 0.00 0.00 0.00 0.00 0.00 00:13:00.988 00:13:01.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.925 Nvme0n1 : 8.00 6445.38 25.18 0.00 0.00 0.00 0.00 0.00 00:13:01.925 [2024-10-08T18:40:30.688Z] =================================================================================================================== 00:13:01.925 [2024-10-08T18:40:30.688Z] Total : 6445.38 25.18 0.00 0.00 0.00 0.00 0.00 00:13:01.925 00:13:02.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.862 Nvme0n1 : 9.00 6448.89 25.19 0.00 0.00 0.00 0.00 0.00 00:13:02.862 [2024-10-08T18:40:31.626Z] =================================================================================================================== 00:13:02.863 [2024-10-08T18:40:31.626Z] Total : 6448.89 25.19 0.00 0.00 0.00 0.00 0.00 00:13:02.863 00:13:03.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.800 Nvme0n1 : 10.00 6464.40 25.25 0.00 0.00 0.00 0.00 0.00 00:13:03.800 [2024-10-08T18:40:32.563Z] =================================================================================================================== 00:13:03.800 [2024-10-08T18:40:32.563Z] Total : 6464.40 25.25 0.00 0.00 0.00 0.00 0.00 00:13:03.800 00:13:03.800 00:13:03.800 Latency(us) 00:13:03.800 [2024-10-08T18:40:32.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.800 Nvme0n1 : 10.01 6462.06 25.24 0.00 0.00 19787.06 7281.78 39030.33 00:13:03.800 [2024-10-08T18:40:32.563Z] =================================================================================================================== 00:13:03.800 [2024-10-08T18:40:32.563Z] Total : 6462.06 25.24 0.00 0.00 19787.06 7281.78 39030.33 00:13:03.800 { 00:13:03.800 "results": [ 00:13:03.800 { 00:13:03.800 "job": "Nvme0n1", 00:13:03.800 "core_mask": "0x2", 00:13:03.800 "workload": "randwrite", 00:13:03.800 "status": "finished", 00:13:03.800 "queue_depth": 128, 00:13:03.800 "io_size": 4096, 00:13:03.800 "runtime": 
10.013683, 00:13:03.800 "iops": 6462.057966085006, 00:13:03.800 "mibps": 25.242413930019556, 00:13:03.800 "io_failed": 0, 00:13:03.800 "io_timeout": 0, 00:13:03.800 "avg_latency_us": 19787.062568959722, 00:13:03.800 "min_latency_us": 7281.777777777777, 00:13:03.800 "max_latency_us": 39030.328888888886 00:13:03.800 } 00:13:03.800 ], 00:13:03.800 "core_count": 1 00:13:03.800 } 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1625964 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1625964 ']' 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1625964 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625964 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625964' 00:13:03.800 killing process with pid 1625964 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1625964 00:13:03.800 Received shutdown signal, test time was about 10.000000 seconds 00:13:03.800 00:13:03.800 Latency(us) 00:13:03.800 [2024-10-08T18:40:32.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.800 [2024-10-08T18:40:32.563Z] =================================================================================================================== 00:13:03.800 [2024-10-08T18:40:32.563Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.800 20:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1625964 00:13:04.367 20:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.938 20:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:05.875 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:05.875 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:06.134 20:40:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1622042 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1622042 00:13:06.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1622042 Killed "${NVMF_APP[@]}" "$@" 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1627590 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1627590 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1627590 ']' 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.134 20:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.134 [2024-10-08 20:40:34.853959] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:13:06.134 [2024-10-08 20:40:34.854134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.392 [2024-10-08 20:40:35.012478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.651 [2024-10-08 20:40:35.237224] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.651 [2024-10-08 20:40:35.237279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.651 [2024-10-08 20:40:35.237297] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.651 [2024-10-08 20:40:35.237311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:06.651 [2024-10-08 20:40:35.237323] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.651 [2024-10-08 20:40:35.238066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.910 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:07.169 [2024-10-08 20:40:35.785943] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:07.169 [2024-10-08 20:40:35.786332] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:07.169 [2024-10-08 20:40:35.786464] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aafd0e34-0632-4d22-8197-e2e97627b54d 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=aafd0e34-0632-4d22-8197-e2e97627b54d 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.169 20:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:08.107 20:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aafd0e34-0632-4d22-8197-e2e97627b54d -t 2000 00:13:08.677 [ 00:13:08.677 { 00:13:08.677 "name": "aafd0e34-0632-4d22-8197-e2e97627b54d", 00:13:08.677 "aliases": [ 00:13:08.677 "lvs/lvol" 00:13:08.677 ], 00:13:08.677 "product_name": "Logical Volume", 00:13:08.677 "block_size": 4096, 00:13:08.677 "num_blocks": 38912, 00:13:08.677 "uuid": "aafd0e34-0632-4d22-8197-e2e97627b54d", 00:13:08.677 "assigned_rate_limits": { 00:13:08.677 "rw_ios_per_sec": 0, 00:13:08.677 "rw_mbytes_per_sec": 0, 
00:13:08.677 "r_mbytes_per_sec": 0, 00:13:08.677 "w_mbytes_per_sec": 0 00:13:08.677 }, 00:13:08.677 "claimed": false, 00:13:08.677 "zoned": false, 00:13:08.677 "supported_io_types": { 00:13:08.677 "read": true, 00:13:08.677 "write": true, 00:13:08.677 "unmap": true, 00:13:08.677 "flush": false, 00:13:08.677 "reset": true, 00:13:08.677 "nvme_admin": false, 00:13:08.677 "nvme_io": false, 00:13:08.677 "nvme_io_md": false, 00:13:08.677 "write_zeroes": true, 00:13:08.677 "zcopy": false, 00:13:08.677 "get_zone_info": false, 00:13:08.677 "zone_management": false, 00:13:08.677 "zone_append": false, 00:13:08.677 "compare": false, 00:13:08.677 "compare_and_write": false, 00:13:08.677 "abort": false, 00:13:08.677 "seek_hole": true, 00:13:08.677 "seek_data": true, 00:13:08.677 "copy": false, 00:13:08.677 "nvme_iov_md": false 00:13:08.677 }, 00:13:08.677 "driver_specific": { 00:13:08.677 "lvol": { 00:13:08.677 "lvol_store_uuid": "f6ab57a5-0275-4fe6-9963-a014de6b68fc", 00:13:08.677 "base_bdev": "aio_bdev", 00:13:08.677 "thin_provision": false, 00:13:08.677 "num_allocated_clusters": 38, 00:13:08.677 "snapshot": false, 00:13:08.677 "clone": false, 00:13:08.677 "esnap_clone": false 00:13:08.677 } 00:13:08.677 } 00:13:08.677 } 00:13:08.677 ] 00:13:08.677 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:08.677 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:08.677 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:08.936 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:08.936 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:08.936 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:09.195 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:09.195 20:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:10.138 [2024-10-08 20:40:38.570604] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:10.138 20:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:10.707 request: 00:13:10.707 { 00:13:10.707 "uuid": "f6ab57a5-0275-4fe6-9963-a014de6b68fc", 00:13:10.707 "method": "bdev_lvol_get_lvstores", 00:13:10.707 "req_id": 1 00:13:10.707 } 00:13:10.707 Got JSON-RPC error response 00:13:10.707 response: 00:13:10.707 { 00:13:10.707 "code": -19, 00:13:10.707 "message": "No such device" 00:13:10.707 } 00:13:10.707 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:10.707 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:10.707 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:10.707 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:10.707 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.965 aio_bdev 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aafd0e34-0632-4d22-8197-e2e97627b54d 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=aafd0e34-0632-4d22-8197-e2e97627b54d 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:10.965 20:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:10.965 20:40:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:11.532 20:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aafd0e34-0632-4d22-8197-e2e97627b54d -t 2000 00:13:11.791 [ 00:13:11.791 { 00:13:11.791 "name": "aafd0e34-0632-4d22-8197-e2e97627b54d", 00:13:11.791 "aliases": [ 00:13:11.791 "lvs/lvol" 00:13:11.791 ], 00:13:11.791 "product_name": "Logical Volume", 00:13:11.791 "block_size": 4096, 00:13:11.791 "num_blocks": 38912, 00:13:11.791 "uuid": "aafd0e34-0632-4d22-8197-e2e97627b54d", 00:13:11.791 "assigned_rate_limits": { 00:13:11.791 "rw_ios_per_sec": 0, 00:13:11.791 "rw_mbytes_per_sec": 0, 00:13:11.791 "r_mbytes_per_sec": 0, 00:13:11.791 "w_mbytes_per_sec": 0 00:13:11.791 }, 00:13:11.791 "claimed": false, 00:13:11.791 "zoned": false, 00:13:11.791 "supported_io_types": { 00:13:11.791 "read": true, 00:13:11.791 "write": true, 00:13:11.791 "unmap": true, 00:13:11.791 "flush": false, 00:13:11.791 "reset": true, 00:13:11.791 "nvme_admin": false, 00:13:11.791 "nvme_io": false, 00:13:11.791 "nvme_io_md": false, 00:13:11.791 "write_zeroes": true, 00:13:11.791 "zcopy": false, 00:13:11.791 "get_zone_info": false, 00:13:11.791 "zone_management": false, 00:13:11.791 "zone_append": false, 00:13:11.791 "compare": false, 00:13:11.791 "compare_and_write": false, 00:13:11.791 "abort": false, 00:13:11.791 "seek_hole": true, 00:13:11.791 "seek_data": true, 00:13:11.791 "copy": false, 00:13:11.791 "nvme_iov_md": false 00:13:11.791 }, 00:13:11.791 "driver_specific": { 00:13:11.791 "lvol": { 00:13:11.791 "lvol_store_uuid": "f6ab57a5-0275-4fe6-9963-a014de6b68fc", 00:13:11.791 "base_bdev": "aio_bdev", 00:13:11.791 "thin_provision": false, 00:13:11.791 "num_allocated_clusters": 38, 00:13:11.791 "snapshot": false, 00:13:11.791 "clone": false, 00:13:11.791 "esnap_clone": false 00:13:11.791 } 00:13:11.791 } 00:13:11.791 } 00:13:11.791 ] 00:13:11.791 20:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:11.791 20:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:11.791 20:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:12.728 20:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:12.728 20:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:12.728 20:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:13.296 20:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:13.296 20:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aafd0e34-0632-4d22-8197-e2e97627b54d 00:13:13.555 20:40:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6ab57a5-0275-4fe6-9963-a014de6b68fc 00:13:14.123 20:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:14.690 00:13:14.690 real 0m28.986s 00:13:14.690 user 1m12.806s 00:13:14.690 sys 0m6.200s 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:14.690 ************************************ 00:13:14.690 END TEST lvs_grow_dirty 00:13:14.690 ************************************ 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:14.690 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:14.690 nvmf_trace.0 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.949 rmmod nvme_tcp 00:13:14.949 rmmod nvme_fabrics 00:13:14.949 rmmod nvme_keyring 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:14.949 
20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1627590 ']' 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1627590 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1627590 ']' 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1627590 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1627590 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1627590' 00:13:14.949 killing process with pid 1627590 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1627590 00:13:14.949 20:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1627590 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.516 20:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:17.439 00:13:17.439 real 1m2.433s 00:13:17.439 user 1m49.252s 00:13:17.439 sys 0m12.040s 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:17.439 ************************************ 00:13:17.439 END TEST nvmf_lvs_grow 00:13:17.439 ************************************ 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:17.439 ************************************ 00:13:17.439 START TEST nvmf_bdev_io_wait 00:13:17.439 ************************************ 00:13:17.439 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:17.699 * Looking for test storage... 00:13:17.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.699 --rc genhtml_branch_coverage=1 00:13:17.699 --rc genhtml_function_coverage=1 00:13:17.699 --rc genhtml_legend=1 00:13:17.699 --rc geninfo_all_blocks=1 00:13:17.699 --rc geninfo_unexecuted_blocks=1 00:13:17.699 00:13:17.699 ' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.699 --rc genhtml_branch_coverage=1 00:13:17.699 --rc genhtml_function_coverage=1 00:13:17.699 --rc genhtml_legend=1 00:13:17.699 --rc geninfo_all_blocks=1 00:13:17.699 --rc geninfo_unexecuted_blocks=1 00:13:17.699 00:13:17.699 ' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.699 --rc genhtml_branch_coverage=1 00:13:17.699 --rc genhtml_function_coverage=1 00:13:17.699 --rc genhtml_legend=1 00:13:17.699 --rc geninfo_all_blocks=1 00:13:17.699 --rc geninfo_unexecuted_blocks=1 00:13:17.699 00:13:17.699 ' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.699 --rc genhtml_branch_coverage=1 00:13:17.699 --rc genhtml_function_coverage=1 00:13:17.699 --rc genhtml_legend=1 00:13:17.699 --rc geninfo_all_blocks=1 00:13:17.699 --rc geninfo_unexecuted_blocks=1 00:13:17.699 00:13:17.699 ' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.699 20:40:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.699 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.700 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.700 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.960 20:40:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:21.255 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:21.255 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.255 20:40:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:21.255 Found net devices under 0000:84:00.0: cvl_0_0 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:21.255 Found net devices under 0000:84:00.1: cvl_0_1 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.255 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:13:21.256 00:13:21.256 --- 10.0.0.2 ping statistics --- 00:13:21.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.256 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:21.256 00:13:21.256 --- 10.0.0.1 ping statistics --- 00:13:21.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.256 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1630790 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1630790 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1630790 ']' 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.256 20:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.256 [2024-10-08 20:40:49.785030] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
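For readability, the test-network bring-up traced above can be condensed to the following sketch; every command is taken from the trace itself (full paths, xtrace prefixes, and the two preliminary "ip -4 addr flush" calls are omitted), and the cvl_0_0/cvl_0_1 names are simply what this runner calls its two E810 ports, so they are specific to this machine:

    ip netns add cvl_0_0_ns_spdk                                        # target side runs in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into that namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # host -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability check

The target application is then started inside the namespace with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc, which produces the "Starting SPDK" banner shown here.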
00:13:21.256 [2024-10-08 20:40:49.785197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.256 [2024-10-08 20:40:49.949064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.517 [2024-10-08 20:40:50.169568] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.517 [2024-10-08 20:40:50.169712] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.517 [2024-10-08 20:40:50.169752] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.517 [2024-10-08 20:40:50.169783] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.517 [2024-10-08 20:40:50.169809] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.517 [2024-10-08 20:40:50.173401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.517 [2024-10-08 20:40:50.173496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.517 [2024-10-08 20:40:50.173591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.517 [2024-10-08 20:40:50.173596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.777 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.777 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:13:21.777 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:21.777 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.777 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:13:21.778 [2024-10-08 20:40:50.489074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.778 Malloc0 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.778 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.037 [2024-10-08 20:40:50.556003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1630825 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:22.037 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1630827 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:22.038 { 00:13:22.038 "params": { 
00:13:22.038 "name": "Nvme$subsystem", 00:13:22.038 "trtype": "$TEST_TRANSPORT", 00:13:22.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "$NVMF_PORT", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.038 "hdgst": ${hdgst:-false}, 00:13:22.038 "ddgst": ${ddgst:-false} 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 } 00:13:22.038 EOF 00:13:22.038 )") 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1630829 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:22.038 { 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme$subsystem", 00:13:22.038 "trtype": "$TEST_TRANSPORT", 00:13:22.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "$NVMF_PORT", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.038 "hdgst": ${hdgst:-false}, 00:13:22.038 "ddgst": ${ddgst:-false} 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 } 00:13:22.038 EOF 00:13:22.038 )") 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1630832 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:22.038 { 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme$subsystem", 00:13:22.038 "trtype": "$TEST_TRANSPORT", 00:13:22.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "$NVMF_PORT", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.038 "hdgst": ${hdgst:-false}, 
00:13:22.038 "ddgst": ${ddgst:-false} 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 } 00:13:22.038 EOF 00:13:22.038 )") 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:22.038 { 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme$subsystem", 00:13:22.038 "trtype": "$TEST_TRANSPORT", 00:13:22.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "$NVMF_PORT", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.038 "hdgst": ${hdgst:-false}, 00:13:22.038 "ddgst": ${ddgst:-false} 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 } 00:13:22.038 EOF 00:13:22.038 )") 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1630825 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme1", 00:13:22.038 "trtype": "tcp", 00:13:22.038 "traddr": "10.0.0.2", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "4420", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.038 "hdgst": false, 00:13:22.038 "ddgst": false 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 }' 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme1", 00:13:22.038 "trtype": "tcp", 00:13:22.038 "traddr": "10.0.0.2", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "4420", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.038 "hdgst": false, 00:13:22.038 "ddgst": false 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 }' 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme1", 00:13:22.038 "trtype": "tcp", 00:13:22.038 "traddr": "10.0.0.2", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "4420", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.038 "hdgst": false, 00:13:22.038 "ddgst": false 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 }' 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:22.038 20:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:22.038 "params": { 00:13:22.038 "name": "Nvme1", 00:13:22.038 "trtype": "tcp", 00:13:22.038 "traddr": "10.0.0.2", 00:13:22.038 "adrfam": "ipv4", 00:13:22.038 "trsvcid": "4420", 00:13:22.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.038 "hdgst": false, 00:13:22.038 "ddgst": false 00:13:22.038 }, 00:13:22.038 "method": "bdev_nvme_attach_controller" 00:13:22.038 }' 00:13:22.038 [2024-10-08 20:40:50.608286] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:13:22.038 [2024-10-08 20:40:50.608354] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:22.038 [2024-10-08 20:40:50.611089] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:13:22.038 [2024-10-08 20:40:50.611086] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:13:22.038 [2024-10-08 20:40:50.611175] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-08 20:40:50.611176] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:22.038 --proc-type=auto ] 00:13:22.038 [2024-10-08 20:40:50.612887] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
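As a condensed view of what the trace above records (a sketch only: full paths and xtrace prefixes are dropped, and rpc_cmd is the test harness's RPC wrapper around the running nvmf_tgt), the target is configured over RPC and then four bdevperf instances are launched, one per workload, each reading one of the JSON configs printed above through --json /dev/fd/63:

    # target side: bdev options, transport, malloc namespace, subsystem, listener
    rpc_cmd bdev_set_options -p 5 -c 1
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: one bdevperf per workload, each pinned to its own core,
    # all attaching to the same Nvme1 controller at 10.0.0.2:4420
    bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
    bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
    bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
    bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256

The per-workload IOPS/latency tables that follow are the output of these four instances, each running its "Running I/O for 1 seconds..." pass against nqn.2016-06.io.spdk:cnode1.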
00:13:22.038 [2024-10-08 20:40:50.612962] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:22.038 [2024-10-08 20:40:50.759595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.297 [2024-10-08 20:40:50.853424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:13:22.297 [2024-10-08 20:40:50.865302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.297 [2024-10-08 20:40:50.971843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:13:22.297 [2024-10-08 20:40:51.006529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.555 [2024-10-08 20:40:51.106508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:13:22.555 [2024-10-08 20:40:51.116068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.555 [2024-10-08 20:40:51.212127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:13:22.813 Running I/O for 1 seconds... 00:13:22.813 Running I/O for 1 seconds... 00:13:23.071 Running I/O for 1 seconds... 00:13:23.071 Running I/O for 1 seconds... 00:13:24.005 12811.00 IOPS, 50.04 MiB/s 00:13:24.005 Latency(us) 00:13:24.005 [2024-10-08T18:40:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.005 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:24.005 Nvme1n1 : 1.01 12872.43 50.28 0.00 0.00 9911.78 4271.98 17864.63 00:13:24.005 [2024-10-08T18:40:52.768Z] =================================================================================================================== 00:13:24.005 [2024-10-08T18:40:52.768Z] Total : 12872.43 50.28 0.00 0.00 9911.78 4271.98 17864.63 00:13:24.005 5302.00 IOPS, 20.71 MiB/s 00:13:24.005 Latency(us) 00:13:24.005 [2024-10-08T18:40:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.005 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:24.005 Nvme1n1 : 1.02 5328.46 20.81 0.00 0.00 23710.75 7233.23 37088.52 00:13:24.005 [2024-10-08T18:40:52.768Z] =================================================================================================================== 00:13:24.005 [2024-10-08T18:40:52.768Z] Total : 5328.46 20.81 0.00 0.00 23710.75 7233.23 37088.52 00:13:24.005 199680.00 IOPS, 780.00 MiB/s 00:13:24.005 Latency(us) 00:13:24.005 [2024-10-08T18:40:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.005 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:24.005 Nvme1n1 : 1.00 199301.75 778.52 0.00 0.00 638.80 304.92 1868.99 00:13:24.005 [2024-10-08T18:40:52.768Z] =================================================================================================================== 00:13:24.005 [2024-10-08T18:40:52.768Z] Total : 199301.75 778.52 0.00 0.00 638.80 304.92 1868.99 00:13:24.005 5621.00 IOPS, 21.96 MiB/s 00:13:24.005 Latency(us) 00:13:24.005 [2024-10-08T18:40:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.005 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:24.005 Nvme1n1 : 1.01 5700.67 22.27 0.00 0.00 22350.95 7621.59 55924.05 00:13:24.005 [2024-10-08T18:40:52.768Z] 
=================================================================================================================== 00:13:24.005 [2024-10-08T18:40:52.768Z] Total : 5700.67 22.27 0.00 0.00 22350.95 7621.59 55924.05 00:13:24.005 20:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1630827 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1630829 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1630832 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.573 rmmod nvme_tcp 00:13:24.573 rmmod nvme_fabrics 00:13:24.573 rmmod nvme_keyring 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1630790 ']' 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1630790 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1630790 ']' 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1630790 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630790 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1630790' 00:13:24.573 killing process with pid 1630790 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1630790 00:13:24.573 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1630790 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.832 20:40:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.369 00:13:27.369 real 0m9.472s 00:13:27.369 user 0m20.579s 00:13:27.369 sys 0m4.738s 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 ************************************ 00:13:27.369 END TEST nvmf_bdev_io_wait 00:13:27.369 ************************************ 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 ************************************ 00:13:27.369 START TEST nvmf_queue_depth 00:13:27.369 ************************************ 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:27.369 * Looking for test storage... 
00:13:27.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.369 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:27.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.370 --rc genhtml_branch_coverage=1 00:13:27.370 --rc genhtml_function_coverage=1 00:13:27.370 --rc genhtml_legend=1 00:13:27.370 --rc geninfo_all_blocks=1 00:13:27.370 --rc geninfo_unexecuted_blocks=1 00:13:27.370 00:13:27.370 ' 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:27.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.370 --rc genhtml_branch_coverage=1 00:13:27.370 --rc genhtml_function_coverage=1 00:13:27.370 --rc genhtml_legend=1 00:13:27.370 --rc geninfo_all_blocks=1 00:13:27.370 --rc geninfo_unexecuted_blocks=1 00:13:27.370 00:13:27.370 ' 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:27.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.370 --rc genhtml_branch_coverage=1 00:13:27.370 --rc genhtml_function_coverage=1 00:13:27.370 --rc genhtml_legend=1 00:13:27.370 --rc geninfo_all_blocks=1 00:13:27.370 --rc geninfo_unexecuted_blocks=1 00:13:27.370 00:13:27.370 ' 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:27.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.370 --rc genhtml_branch_coverage=1 00:13:27.370 --rc genhtml_function_coverage=1 00:13:27.370 --rc genhtml_legend=1 00:13:27.370 --rc geninfo_all_blocks=1 00:13:27.370 --rc geninfo_unexecuted_blocks=1 00:13:27.370 00:13:27.370 ' 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.370 20:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.370 20:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:30.662 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:30.662 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:30.662 Found net devices under 0000:84:00.0: cvl_0_0 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.662 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:30.663 Found net devices under 0000:84:00.1: cvl_0_1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:13:30.663 00:13:30.663 --- 10.0.0.2 ping statistics --- 00:13:30.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.663 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:30.663 00:13:30.663 --- 10.0.0.1 ping statistics --- 00:13:30.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.663 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1633328 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1633328 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1633328 ']' 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.663 20:40:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:30.663 [2024-10-08 20:40:59.320737] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
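In outline, the nvmftestinit sequence traced above carves the two E810 ports into a split test network: the target-side port is moved into a private namespace while the initiator-side port stays in the root namespace, with TCP port 4420 opened between them. A minimal sketch of the equivalent commands, reusing the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses reported in this run (other hosts will report different device names), is:

# Target-side port lives in its own namespace; initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp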
00:13:30.663 [2024-10-08 20:40:59.320846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.923 [2024-10-08 20:40:59.448635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.923 [2024-10-08 20:40:59.663134] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.923 [2024-10-08 20:40:59.663268] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.923 [2024-10-08 20:40:59.663324] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.923 [2024-10-08 20:40:59.663372] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.923 [2024-10-08 20:40:59.663412] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.923 [2024-10-08 20:40:59.664892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.860 [2024-10-08 20:41:00.561735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.860 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.118 Malloc0 00:13:32.118 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.118 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.118 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.119 20:41:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.119 [2024-10-08 20:41:00.647095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1633488 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1633488 /var/tmp/bdevperf.sock 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1633488 ']' 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.119 20:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.119 [2024-10-08 20:41:00.744147] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
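The queue-depth test itself reduces to the RPC sequence traced above: start nvmf_tgt inside the target namespace, export a 64 MiB, 512-byte-block malloc bdev over NVMe/TCP, then drive it from bdevperf at a queue depth of 1024. A rough sketch follows; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, and $SPDK below stands in for the workspace path used in this run:

# Target: nvmf_tgt pinned to core 1 inside the namespace (as started by nvmfappstart -m 0x2).
# The framework waits for each app's RPC socket before issuing RPCs; backgrounding here is a simplification.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Export a 64 MiB, 512-byte-block malloc bdev over NVMe/TCP on 10.0.0.2:4420.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator: bdevperf with queue depth 1024, 4 KiB verify workload, 10-second run.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests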
00:13:32.119 [2024-10-08 20:41:00.744301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633488 ] 00:13:32.377 [2024-10-08 20:41:00.890277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.377 [2024-10-08 20:41:01.106394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:33.753 NVMe0n1 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.753 20:41:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:34.013 Running I/O for 10 seconds... 00:13:35.911 3072.00 IOPS, 12.00 MiB/s [2024-10-08T18:41:05.721Z] 3534.50 IOPS, 13.81 MiB/s [2024-10-08T18:41:06.674Z] 3414.33 IOPS, 13.34 MiB/s [2024-10-08T18:41:07.607Z] 3579.25 IOPS, 13.98 MiB/s [2024-10-08T18:41:08.982Z] 3531.80 IOPS, 13.80 MiB/s [2024-10-08T18:41:09.915Z] 3582.50 IOPS, 13.99 MiB/s [2024-10-08T18:41:10.851Z] 3566.71 IOPS, 13.93 MiB/s [2024-10-08T18:41:11.788Z] 3582.88 IOPS, 14.00 MiB/s [2024-10-08T18:41:12.725Z] 3634.78 IOPS, 14.20 MiB/s [2024-10-08T18:41:12.725Z] 3602.60 IOPS, 14.07 MiB/s 00:13:43.962 Latency(us) 00:13:43.962 [2024-10-08T18:41:12.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.962 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:43.962 Verification LBA range: start 0x0 length 0x4000 00:13:43.962 NVMe0n1 : 10.16 3645.26 14.24 0.00 0.00 278090.94 21651.15 166995.44 00:13:43.962 [2024-10-08T18:41:12.725Z] =================================================================================================================== 00:13:43.962 [2024-10-08T18:41:12.725Z] Total : 3645.26 14.24 0.00 0.00 278090.94 21651.15 166995.44 00:13:44.221 { 00:13:44.221 "results": [ 00:13:44.221 { 00:13:44.221 "job": "NVMe0n1", 00:13:44.221 "core_mask": "0x1", 00:13:44.221 "workload": "verify", 00:13:44.221 "status": "finished", 00:13:44.221 "verify_range": { 00:13:44.221 "start": 0, 00:13:44.221 "length": 16384 00:13:44.221 }, 00:13:44.221 "queue_depth": 1024, 00:13:44.221 "io_size": 4096, 00:13:44.221 "runtime": 10.15539, 00:13:44.221 "iops": 3645.2563614002024, 00:13:44.221 "mibps": 14.23928266171954, 00:13:44.221 "io_failed": 0, 00:13:44.221 "io_timeout": 0, 00:13:44.221 "avg_latency_us": 278090.94318277, 00:13:44.221 "min_latency_us": 21651.152592592593, 00:13:44.221 "max_latency_us": 166995.43703703705 00:13:44.221 } 00:13:44.221 ], 00:13:44.221 "core_count": 1 00:13:44.221 } 00:13:44.221 20:41:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1633488 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1633488 ']' 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1633488 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1633488 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1633488' 00:13:44.221 killing process with pid 1633488 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1633488 00:13:44.221 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.221 00:13:44.221 Latency(us) 00:13:44.221 [2024-10-08T18:41:12.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.221 [2024-10-08T18:41:12.984Z] =================================================================================================================== 00:13:44.221 [2024-10-08T18:41:12.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.221 20:41:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1633488 00:13:44.478 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:44.478 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.479 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:44.479 rmmod nvme_tcp 00:13:44.736 rmmod nvme_fabrics 00:13:44.736 rmmod nvme_keyring 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1633328 ']' 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1633328 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1633328 ']' 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1633328 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1633328 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1633328' 00:13:44.736 killing process with pid 1633328 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1633328 00:13:44.736 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1633328 00:13:45.302 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:45.302 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:45.302 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:45.302 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.303 20:41:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.227 00:13:47.227 real 0m20.216s 00:13:47.227 user 0m27.996s 00:13:47.227 sys 0m4.612s 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:47.227 ************************************ 00:13:47.227 END TEST nvmf_queue_depth 00:13:47.227 ************************************ 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.227 ************************************ 00:13:47.227 START TEST nvmf_target_multipath 00:13:47.227 ************************************ 00:13:47.227 20:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:47.486 * Looking for test storage... 00:13:47.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.486 --rc genhtml_branch_coverage=1 00:13:47.486 --rc genhtml_function_coverage=1 00:13:47.486 --rc genhtml_legend=1 00:13:47.486 --rc geninfo_all_blocks=1 00:13:47.486 --rc geninfo_unexecuted_blocks=1 00:13:47.486 00:13:47.486 ' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.486 --rc genhtml_branch_coverage=1 00:13:47.486 --rc genhtml_function_coverage=1 00:13:47.486 --rc genhtml_legend=1 00:13:47.486 --rc geninfo_all_blocks=1 00:13:47.486 --rc geninfo_unexecuted_blocks=1 00:13:47.486 00:13:47.486 ' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.486 --rc genhtml_branch_coverage=1 00:13:47.486 --rc genhtml_function_coverage=1 00:13:47.486 --rc genhtml_legend=1 00:13:47.486 --rc geninfo_all_blocks=1 00:13:47.486 --rc geninfo_unexecuted_blocks=1 00:13:47.486 00:13:47.486 ' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.486 --rc genhtml_branch_coverage=1 00:13:47.486 --rc genhtml_function_coverage=1 00:13:47.486 --rc genhtml_legend=1 00:13:47.486 --rc geninfo_all_blocks=1 00:13:47.486 --rc geninfo_unexecuted_blocks=1 00:13:47.486 00:13:47.486 ' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.486 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.487 20:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:50.779 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:50.779 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.779 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:50.780 Found net devices under 0000:84:00.0: cvl_0_0 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.780 20:41:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:50.780 Found net devices under 0000:84:00.1: cvl_0_1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.780 20:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:13:50.780 00:13:50.780 --- 10.0.0.2 ping statistics --- 00:13:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.780 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:50.780 00:13:50.780 --- 10.0.0.1 ping statistics --- 00:13:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.780 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:50.780 only one NIC for nvmf test 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
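[Editor's note] For readability, the nvmf_tcp_init sequence traced above condenses to the following ip/iptables steps. This is only a transcription of what the trace already shows for this run; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the 10.0.0.x addresses are values chosen by this particular job, not fixed constants.

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one of the two detected E810 ports into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port in the root-namespace firewall; the SPDK_NVMF comment is what
  # iptr greps out again when it restores the ruleset at teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator

NVMF_TARGET_NS_CMD is set to "ip netns exec cvl_0_0_ns_spdk" and prepended to NVMF_APP, so the nvmf target launched later in this log runs inside that namespace while bdevperf and the other initiator-side tools stay in the root namespace.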
00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.780 rmmod nvme_tcp 00:13:50.780 rmmod nvme_fabrics 00:13:50.780 rmmod nvme_keyring 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.780 20:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.697 00:13:52.697 real 0m5.262s 00:13:52.697 user 0m1.040s 00:13:52.697 sys 0m2.241s 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:52.697 ************************************ 00:13:52.697 END TEST nvmf_target_multipath 00:13:52.697 ************************************ 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:52.697 ************************************ 00:13:52.697 START TEST nvmf_zcopy 00:13:52.697 ************************************ 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.697 * Looking for test storage... 
00:13:52.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:52.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.697 --rc genhtml_branch_coverage=1 00:13:52.697 --rc genhtml_function_coverage=1 00:13:52.697 --rc genhtml_legend=1 00:13:52.697 --rc geninfo_all_blocks=1 00:13:52.697 --rc geninfo_unexecuted_blocks=1 00:13:52.697 00:13:52.697 ' 00:13:52.697 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:52.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.698 --rc genhtml_branch_coverage=1 00:13:52.698 --rc genhtml_function_coverage=1 00:13:52.698 --rc genhtml_legend=1 00:13:52.698 --rc geninfo_all_blocks=1 00:13:52.698 --rc geninfo_unexecuted_blocks=1 00:13:52.698 00:13:52.698 ' 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:52.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.698 --rc genhtml_branch_coverage=1 00:13:52.698 --rc genhtml_function_coverage=1 00:13:52.698 --rc genhtml_legend=1 00:13:52.698 --rc geninfo_all_blocks=1 00:13:52.698 --rc geninfo_unexecuted_blocks=1 00:13:52.698 00:13:52.698 ' 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:52.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.698 --rc genhtml_branch_coverage=1 00:13:52.698 --rc genhtml_function_coverage=1 00:13:52.698 --rc genhtml_legend=1 00:13:52.698 --rc geninfo_all_blocks=1 00:13:52.698 --rc geninfo_unexecuted_blocks=1 00:13:52.698 00:13:52.698 ' 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.698 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.956 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.957 20:41:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.494 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:55.495 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.495 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:55.756 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:55.756 Found net devices under 0000:84:00.0: cvl_0_0 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:55.756 Found net devices under 0000:84:00.1: cvl_0_1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:13:55.756 00:13:55.756 --- 10.0.0.2 ping statistics --- 00:13:55.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.756 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:13:55.756 00:13:55.756 --- 10.0.0.1 ping statistics --- 00:13:55.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.756 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1638995 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1638995 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1638995 ']' 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.756 20:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:56.017 [2024-10-08 20:41:24.554623] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:13:56.017 [2024-10-08 20:41:24.554802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.017 [2024-10-08 20:41:24.671391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.276 [2024-10-08 20:41:24.823085] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.276 [2024-10-08 20:41:24.823183] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.276 [2024-10-08 20:41:24.823220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.276 [2024-10-08 20:41:24.823251] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.276 [2024-10-08 20:41:24.823276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.276 [2024-10-08 20:41:24.824416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.216 [2024-10-08 20:41:25.922117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.216 [2024-10-08 20:41:25.938339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.216 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.476 malloc0 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:57.476 { 00:13:57.476 "params": { 00:13:57.476 "name": "Nvme$subsystem", 00:13:57.476 "trtype": "$TEST_TRANSPORT", 00:13:57.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.476 "adrfam": "ipv4", 00:13:57.476 "trsvcid": "$NVMF_PORT", 00:13:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.476 "hdgst": ${hdgst:-false}, 00:13:57.476 "ddgst": ${ddgst:-false} 00:13:57.476 }, 00:13:57.476 "method": "bdev_nvme_attach_controller" 00:13:57.476 } 00:13:57.476 EOF 00:13:57.476 )") 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
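[Editor's note] The target-side configuration for the zcopy run traced above is plain JSON-RPC against the nvmf_tgt instance that was started earlier and is listening on /var/tmp/spdk.sock. rpc_cmd is the test harness's wrapper for those calls; issued by hand, the same sequence would look roughly like the sketch below (the scripts/rpc.py form is an assumption on my part; all argument values are copied from the trace).

  rpc=./scripts/rpc.py            # talks to /var/tmp/spdk.sock by default
  # TCP transport, created with the options the test passes (-o, -c 0) plus --zcopy
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem cnode1: any host allowed (-a), serial SPDK00000000000001, up to 10 namespaces
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # data and discovery listeners on the namespaced target address
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with a 4096-byte block size, exported as namespace 1
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON fragment printed immediately below is the initiator-side counterpart produced by gen_nvmf_target_json: a single bdev_nvme_attach_controller entry pointing Nvme1 at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, which bdevperf reads through --json /dev/fd/62.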
00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:13:57.476 20:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:57.476 "params": { 00:13:57.476 "name": "Nvme1", 00:13:57.476 "trtype": "tcp", 00:13:57.476 "traddr": "10.0.0.2", 00:13:57.476 "adrfam": "ipv4", 00:13:57.476 "trsvcid": "4420", 00:13:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.476 "hdgst": false, 00:13:57.476 "ddgst": false 00:13:57.476 }, 00:13:57.476 "method": "bdev_nvme_attach_controller" 00:13:57.476 }' 00:13:57.476 [2024-10-08 20:41:26.044161] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:13:57.476 [2024-10-08 20:41:26.044251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639150 ] 00:13:57.476 [2024-10-08 20:41:26.110241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.737 [2024-10-08 20:41:26.323416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.996 Running I/O for 10 seconds... 00:14:00.310 3316.00 IOPS, 25.91 MiB/s [2024-10-08T18:41:30.008Z] 3165.00 IOPS, 24.73 MiB/s [2024-10-08T18:41:30.944Z] 3105.00 IOPS, 24.26 MiB/s [2024-10-08T18:41:31.882Z] 3178.00 IOPS, 24.83 MiB/s [2024-10-08T18:41:32.820Z] 3036.60 IOPS, 23.72 MiB/s [2024-10-08T18:41:33.760Z] 2998.00 IOPS, 23.42 MiB/s [2024-10-08T18:41:34.703Z] 3024.71 IOPS, 23.63 MiB/s [2024-10-08T18:41:35.677Z] 3076.75 IOPS, 24.04 MiB/s [2024-10-08T18:41:37.056Z] 3011.44 IOPS, 23.53 MiB/s [2024-10-08T18:41:37.056Z] 2976.40 IOPS, 23.25 MiB/s 00:14:08.293 Latency(us) 00:14:08.293 [2024-10-08T18:41:37.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.293 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:08.293 Verification LBA range: start 0x0 length 0x1000 00:14:08.293 Nvme1n1 : 10.01 2980.15 23.28 0.00 0.00 42817.75 725.14 69905.07 00:14:08.293 [2024-10-08T18:41:37.056Z] =================================================================================================================== 00:14:08.293 [2024-10-08T18:41:37.056Z] Total : 2980.15 23.28 0.00 0.00 42817.75 725.14 69905.07 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1640471 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:14:08.293 { 00:14:08.293 "params": { 00:14:08.293 "name": 
"Nvme$subsystem", 00:14:08.293 "trtype": "$TEST_TRANSPORT", 00:14:08.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.293 "adrfam": "ipv4", 00:14:08.293 "trsvcid": "$NVMF_PORT", 00:14:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.293 "hdgst": ${hdgst:-false}, 00:14:08.293 "ddgst": ${ddgst:-false} 00:14:08.293 }, 00:14:08.293 "method": "bdev_nvme_attach_controller" 00:14:08.293 } 00:14:08.293 EOF 00:14:08.293 )") 00:14:08.293 [2024-10-08 20:41:36.967473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:36.967585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:14:08.293 20:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:14:08.293 "params": { 00:14:08.293 "name": "Nvme1", 00:14:08.293 "trtype": "tcp", 00:14:08.293 "traddr": "10.0.0.2", 00:14:08.293 "adrfam": "ipv4", 00:14:08.293 "trsvcid": "4420", 00:14:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.293 "hdgst": false, 00:14:08.293 "ddgst": false 00:14:08.293 }, 00:14:08.293 "method": "bdev_nvme_attach_controller" 00:14:08.293 }' 00:14:08.293 [2024-10-08 20:41:36.979319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:36.979354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:36.987334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:36.987365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:36.995358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:36.995389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:37.003378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:37.003409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:37.015421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:37.015454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:37.023434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:37.023465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.293 [2024-10-08 20:41:37.026410] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:14:08.293 [2024-10-08 20:41:37.026498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640471 ] 00:14:08.293 [2024-10-08 20:41:37.031457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.293 [2024-10-08 20:41:37.031488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.294 [2024-10-08 20:41:37.039480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.294 [2024-10-08 20:41:37.039510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.294 [2024-10-08 20:41:37.047504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.294 [2024-10-08 20:41:37.047534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.294 [2024-10-08 20:41:37.055529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.294 [2024-10-08 20:41:37.055559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.553 [2024-10-08 20:41:37.063550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.063580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.071572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.071603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.083709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.083735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.095747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.095780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.107768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.107795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.119760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.119786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.128176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.554 [2024-10-08 20:41:37.131773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.131799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.143822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.143860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.155833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.155864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:08.554 [2024-10-08 20:41:37.167857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.167884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.179998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.180055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.191952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.192011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.204015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.204073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.216096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.216151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.228152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.228210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.240236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.240307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.252220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.252280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.264258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.264313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.276298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.276353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.288338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.288393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.300373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.300428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.554 [2024-10-08 20:41:37.312422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.554 [2024-10-08 20:41:37.312480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.324344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.324411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.325711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.813 [2024-10-08 
20:41:37.336487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.336542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.348561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.348629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.360602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.360694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.372639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.372737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.384702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.384766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.396721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.396764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.408756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.408805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.420769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.420813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.432767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.432793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.444856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.444900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.456819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.456862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.468856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.468898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.480852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.480877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.813 [2024-10-08 20:41:37.492883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.813 [2024-10-08 20:41:37.492909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.504930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.504961] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.517047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.517112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.529100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.529165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.541125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.541200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.553182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.553239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.814 [2024-10-08 20:41:37.565231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.814 [2024-10-08 20:41:37.565286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.577299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.577363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.589321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.589384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.601363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.601419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.613405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.613461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.625453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.625511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.637497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.637558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.649554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.649624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 [2024-10-08 20:41:37.661595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.661679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.072 Running I/O for 5 seconds... 
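
"Running I/O for 5 seconds..." marks the point where bdevperf starts the workload; from here until the run ends, the target keeps emitting the same pair of messages: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 because the subsystem already holds it, followed by nvmf_rpc_ns_paused reporting that the namespace could not be added. That pattern is consistent with the add-namespace RPC being re-issued against nqn.2016-06.io.spdk:cnode1 while I/O is in flight; a rough sketch of such a loop is below. The rpc.py option names and the Malloc0 bdev name are assumptions for illustration, not taken from this log.

# Sketch only (assumed rpc.py arguments): every attempt to add NSID 1 to a
# subsystem that already owns NSID 1 yields the error pair seen throughout this log.
NQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 10); do
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 "$NQN" Malloc0 ||
        echo "attempt $i: NSID 1 already in use, namespace not added"
done
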
00:14:09.072 [2024-10-08 20:41:37.682483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.072 [2024-10-08 20:41:37.682555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.703803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.703836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.725837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.725870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.745974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.746023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.768742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.768775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.790070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.790144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.811616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.811713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.073 [2024-10-08 20:41:37.832569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.073 [2024-10-08 20:41:37.832627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.852635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.852737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.873931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.874025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.895828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.895860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.917755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.917788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.938342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.938418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.953212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.953245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.965307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 
[2024-10-08 20:41:37.965338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.976998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.977030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:37.988866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:37.988898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.000853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.000885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.012979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.013011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.024829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.024861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.037391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.037417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.047033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.047059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.057613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.057664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.068512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.068538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.079411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.079437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.331 [2024-10-08 20:41:38.091737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.331 [2024-10-08 20:41:38.091766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.101894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.101921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.112767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.112794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.123430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.123456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.133939] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.133967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.144297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.144324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.154626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.154676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.165132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.165158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.175486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.175512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.186090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.186116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.198293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.198319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.207844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.207872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.220326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.220351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.230192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.230218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.240905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.240947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.594 [2024-10-08 20:41:38.251331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.594 [2024-10-08 20:41:38.251358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.262391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.262465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.277058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.277129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.295754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.295781] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.314081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.314153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.333828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.333855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.595 [2024-10-08 20:41:38.352969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.595 [2024-10-08 20:41:38.352998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.371308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.371381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.392291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.392362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.413111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.413182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.434597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.434708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.455342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.455415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.476527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.476599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.497832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.497864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.518907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.518941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.539186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.539260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.561526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.561597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.583721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.583753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.852 [2024-10-08 20:41:38.605450] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.852 [2024-10-08 20:41:38.605523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.620782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.620813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.632522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.632553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.650850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.650882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 7953.00 IOPS, 62.13 MiB/s [2024-10-08T18:41:38.874Z] [2024-10-08 20:41:38.672107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.672189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.693088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.693168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.714282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.714369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.735760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.735793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.752407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.752480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.774733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.774766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.791398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.791473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.812258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.812331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.833535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.833607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.111 [2024-10-08 20:41:38.854234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.111 [2024-10-08 20:41:38.854306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.873978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
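
The periodic bdevperf readings in this stretch (7953.00 IOPS at 62.13 MiB/s here, then 6988.50, 6628.67 and 6815.00 IOPS further down) all imply the same transfer size: bandwidth divided by IOPS comes out near 8192 bytes, i.e. an 8 KiB I/O size. That size is inferred from the numbers rather than stated in this excerpt; a quick check:

# bandwidth (MiB/s) * 2^20 / IOPS ≈ bytes per I/O; every reading lands near 8192.
awk 'BEGIN {
    printf "%.1f\n", 62.13 * 1048576 / 7953.00
    printf "%.1f\n", 54.60 * 1048576 / 6988.50
    printf "%.1f\n", 51.79 * 1048576 / 6628.67
    printf "%.1f\n", 53.24 * 1048576 / 6815.00
}'
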
00:14:10.370 [2024-10-08 20:41:38.874065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.894888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:38.894941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.916507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:38.916580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.937267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:38.937337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.957851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:38.957883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:38.979213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:38.979284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.000524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.000596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.021871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.021904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.043096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.043168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.060872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.060905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.081483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.081554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.102313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.102400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.370 [2024-10-08 20:41:39.123141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.370 [2024-10-08 20:41:39.123213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.629 [2024-10-08 20:41:39.142944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.629 [2024-10-08 20:41:39.143017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.629 [2024-10-08 20:41:39.164046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.629 [2024-10-08 20:41:39.164117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.629 [2024-10-08 20:41:39.185222] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.629 [2024-10-08 20:41:39.185294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.629 [2024-10-08 20:41:39.207209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.629 [2024-10-08 20:41:39.207280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.229295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.229368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.250724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.250757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.272226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.272303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.289343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.289417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.309940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.310024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.329911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.329962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.350997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.351070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.630 [2024-10-08 20:41:39.372353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.630 [2024-10-08 20:41:39.372424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.393568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.393638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.415571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.415642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.437106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.437176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.458210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.458281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.480052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.480124] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.501865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.501906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.523910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.523966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.545733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.545766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.567594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.567697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.584749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.584781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.605779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.605811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.627337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.627409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.889 [2024-10-08 20:41:39.648530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.889 [2024-10-08 20:41:39.648601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.669130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.669202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 6988.50 IOPS, 54.60 MiB/s [2024-10-08T18:41:39.911Z] [2024-10-08 20:41:39.690360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.690431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.711614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.711711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.732759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.732791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.754250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.754321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.770463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.770533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 
20:41:39.791060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.791141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.811900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.811932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.833824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.833857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.148 [2024-10-08 20:41:39.849735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.148 [2024-10-08 20:41:39.849807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.149 [2024-10-08 20:41:39.871909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.149 [2024-10-08 20:41:39.871943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.149 [2024-10-08 20:41:39.893781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.149 [2024-10-08 20:41:39.893813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.149 [2024-10-08 20:41:39.909548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.149 [2024-10-08 20:41:39.909618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:39.929712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:39.929744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:39.950607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:39.950711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:39.971742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:39.971774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:39.992815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:39.992847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.009455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.009491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.030048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.030138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.049124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.049162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.070786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.070859] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.091144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.091215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.112804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.112878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.136077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.136147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.408 [2024-10-08 20:41:40.161455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.408 [2024-10-08 20:41:40.161529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.184940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.184992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.208374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.208446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.231588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.231677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.255134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.255205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.279202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.279274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.302984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.303066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.326279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.326352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.350006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.350078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.373561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.373632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.396610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.396697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.668 [2024-10-08 20:41:40.418928] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.668 [2024-10-08 20:41:40.418961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.442904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.442943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.466564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.466635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.489430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.489500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.512967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.513039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.537846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.537915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.561917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.561959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.584924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.584997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.608530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.608599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.630713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.630746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.641390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.641422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.652586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.652618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.663937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.663970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 6628.67 IOPS, 51.79 MiB/s [2024-10-08T18:41:40.690Z] [2024-10-08 20:41:40.675607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.927 [2024-10-08 20:41:40.675638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.927 [2024-10-08 20:41:40.686998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:11.927 [2024-10-08 20:41:40.687041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.699665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.699692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.709959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.709987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.720604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.720630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.733242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.733268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.743334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.743360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.754188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.754214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.766808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.766837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.776934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.776962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.787895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.787925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.799004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.799044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.810090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.810117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.820999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.821026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.832023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.832050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.844225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.844260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.853976] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.854002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.866391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.866417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.877103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.877129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.887463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.887497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.898113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.898139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.908791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.908819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.921376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.921402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.931667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.931694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.185 [2024-10-08 20:41:40.942292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.185 [2024-10-08 20:41:40.942318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:40.958952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:40.959028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:40.983009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:40.983081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.007814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.007886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.032694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.032764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.057406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.057477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.082363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.082434] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.107714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.107784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.132313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.132383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.155614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.155703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.179642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.179732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.445 [2024-10-08 20:41:41.203746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.445 [2024-10-08 20:41:41.203784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.227301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.227372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.251963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.252034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.277144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.277229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.301917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.301990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.325854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.325926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.348188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.348260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.368080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.368163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.388253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.388324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.407757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.407789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.426899] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.426932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.704 [2024-10-08 20:41:41.447732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.704 [2024-10-08 20:41:41.447764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.963 [2024-10-08 20:41:41.468832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.963 [2024-10-08 20:41:41.468864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.963 [2024-10-08 20:41:41.489337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.963 [2024-10-08 20:41:41.489409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.963 [2024-10-08 20:41:41.509718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.509751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.529747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.529780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.549543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.549614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.569395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.569467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.590587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.590675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.607580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.607669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.627861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.627894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.648770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.648803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.669080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.669167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 6815.00 IOPS, 53.24 MiB/s [2024-10-08T18:41:41.727Z] [2024-10-08 20:41:41.690820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.964 [2024-10-08 20:41:41.690853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.964 [2024-10-08 20:41:41.711559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:12.964 [2024-10-08 20:41:41.711629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.732538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.732609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.747430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.747501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.768053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.768124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.789741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.789773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.809950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.810021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.830787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.830819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.850768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.850800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.871495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.871566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.893537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.893608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.914101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.914173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.930911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.930943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.951164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.951197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.970035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.970068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.222 [2024-10-08 20:41:41.981254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.222 [2024-10-08 20:41:41.981287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:41.993196] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:41.993227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.005131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.005163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.017090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.017123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.029104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.029136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.040806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.040838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.052775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.052807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.065157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.065189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.075992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.076034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.089261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.089289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.099435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.479 [2024-10-08 20:41:42.099463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.479 [2024-10-08 20:41:42.110237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.110265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.123105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.123133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.133460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.133487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.144315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.144342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.156894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.156923] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.166670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.166699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.177216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.177243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.188303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.188330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.200823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.200852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.210950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.210978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.221615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.221665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.480 [2024-10-08 20:41:42.232355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.480 [2024-10-08 20:41:42.232381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.738 [2024-10-08 20:41:42.243611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.738 [2024-10-08 20:41:42.243661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.738 [2024-10-08 20:41:42.255990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.738 [2024-10-08 20:41:42.256031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.738 [2024-10-08 20:41:42.266733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.266762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.277586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.277613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.294897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.294926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.313219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.313304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.332011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.332095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.350317] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.350390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.368710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.368738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.386229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.386311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.405531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.405603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.427887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.427955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.447983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.448054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.468270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.468342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.739 [2024-10-08 20:41:42.487867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.739 [2024-10-08 20:41:42.487900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.507967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.508041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.528508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.528580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.548882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.548914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.568779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.568811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.589237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.589311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.609483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.609556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.628855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.628888] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.649156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.649228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.670464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.670536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 7033.00 IOPS, 54.95 MiB/s [2024-10-08T18:41:42.760Z] [2024-10-08 20:41:42.688893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.688925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 00:14:13.997 Latency(us) 00:14:13.997 [2024-10-08T18:41:42.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.997 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:13.997 Nvme1n1 : 5.01 7037.85 54.98 0.00 0.00 18150.00 5024.43 37476.88 00:14:13.997 [2024-10-08T18:41:42.760Z] =================================================================================================================== 00:14:13.997 [2024-10-08T18:41:42.760Z] Total : 7037.85 54.98 0.00 0.00 18150.00 5024.43 37476.88 00:14:13.997 [2024-10-08 20:41:42.695144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.695210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.703189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.703256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.711202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.711264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.719112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.719139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.727205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.727256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.735228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.735284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.743242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.743296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.751256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.751309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.997 [2024-10-08 20:41:42.759284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.997 [2024-10-08 20:41:42.759351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.767306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.767356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.775336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.775391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.787516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.787611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.795489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.795571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.803442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.803501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.815543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.815615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.823470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.823522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.835703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.835762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.843525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.843577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.851675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.851743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.859574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.859624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.867669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.867727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.875699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.875725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.883713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.883739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.891723] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.891749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.903745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.903770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.911750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.911775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.919720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.919750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.927892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.928013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.935878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.935950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.943813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.943843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.951805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.951831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.959827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.959852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.967850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.967875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.975872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.975897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.983895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.983944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.991914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:42.991961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:42.999983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:43.000026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:43.008041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:43.008096] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.256 [2024-10-08 20:41:43.016043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.256 [2024-10-08 20:41:43.016095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.516 [2024-10-08 20:41:43.028150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.516 [2024-10-08 20:41:43.028206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.516 [2024-10-08 20:41:43.036153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.516 [2024-10-08 20:41:43.036208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.516 [2024-10-08 20:41:43.044177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.516 [2024-10-08 20:41:43.044232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.516 [2024-10-08 20:41:43.052206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.516 [2024-10-08 20:41:43.052261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1640471) - No such process 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1640471 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.516 delay0 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:14.516 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.517 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.517 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.517 20:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:14.517 [2024-10-08 20:41:43.146795] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:21.100 Initializing NVMe Controllers 00:14:21.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:14:21.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:21.100 Initialization complete. Launching workers. 00:14:21.100 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 61 00:14:21.100 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 348, failed to submit 33 00:14:21.100 success 158, unsuccessful 190, failed 0 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.100 rmmod nvme_tcp 00:14:21.100 rmmod nvme_fabrics 00:14:21.100 rmmod nvme_keyring 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1638995 ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1638995 ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638995' 00:14:21.100 killing process with pid 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1638995 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:21.100 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:21.101 20:41:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.101 20:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.643 00:14:23.643 real 0m30.588s 00:14:23.643 user 0m43.252s 00:14:23.643 sys 0m9.840s 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:23.643 ************************************ 00:14:23.643 END TEST nvmf_zcopy 00:14:23.643 ************************************ 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.643 ************************************ 00:14:23.643 START TEST nvmf_nmic 00:14:23.643 ************************************ 00:14:23.643 20:41:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:23.643 * Looking for test storage... 
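[Editor's note] Before the nmic run proper starts, the nvmf_zcopy teardown traced above can be recapped. It boils down to roughly the following shell sequence; this is a condensed sketch, not the literal nvmftestfini/nvmf_tcp_fini functions from nvmf/common.sh. The PID and interface names are the ones used in this run, and the netns-removal step is an assumption based on the _remove_spdk_ns name (the namespace is recreated later for the nmic test, so it must have been torn down here).

    sync
    modprobe -v -r nvme-tcp            # unloads nvme_tcp (rmmod nvme_tcp/nvme_fabrics/nvme_keyring logged above)
    modprobe -v -r nvme-fabrics
    kill 1638995 && wait 1638995       # stop the nvmf target application (reactor_1) started for this test
    # keep every firewall rule except the SPDK test rules, which carry an 'SPDK_NVMF:' comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
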
00:14:23.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.643 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.644 --rc genhtml_branch_coverage=1 00:14:23.644 --rc genhtml_function_coverage=1 00:14:23.644 --rc genhtml_legend=1 00:14:23.644 --rc geninfo_all_blocks=1 00:14:23.644 --rc geninfo_unexecuted_blocks=1 00:14:23.644 00:14:23.644 ' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.644 --rc genhtml_branch_coverage=1 00:14:23.644 --rc genhtml_function_coverage=1 00:14:23.644 --rc genhtml_legend=1 00:14:23.644 --rc geninfo_all_blocks=1 00:14:23.644 --rc geninfo_unexecuted_blocks=1 00:14:23.644 00:14:23.644 ' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.644 --rc genhtml_branch_coverage=1 00:14:23.644 --rc genhtml_function_coverage=1 00:14:23.644 --rc genhtml_legend=1 00:14:23.644 --rc geninfo_all_blocks=1 00:14:23.644 --rc geninfo_unexecuted_blocks=1 00:14:23.644 00:14:23.644 ' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.644 --rc genhtml_branch_coverage=1 00:14:23.644 --rc genhtml_function_coverage=1 00:14:23.644 --rc genhtml_legend=1 00:14:23.644 --rc geninfo_all_blocks=1 00:14:23.644 --rc geninfo_unexecuted_blocks=1 00:14:23.644 00:14:23.644 ' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
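[Editor's note] The lcov detection traced a few lines above goes through the version-comparison helpers in scripts/common.sh (lt calling cmp_versions). A condensed sketch of that logic, written from the behaviour visible in the trace rather than quoting the script verbatim:

    cmp_versions() {
        # split both version strings on '.', '-' or ':' and compare field by field
        local IFS=.-:
        local -a ver1 ver2
        local op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> cmp_versions 1.15 '<' 2, as in the trace above

Here lt 1.15 2 succeeds on the first field (1 < 2), which is why the extended LCOV_OPTS with branch/function coverage are exported above.
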
00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:23.644 
20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.644 20:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:26.932 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:26.932 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.932 20:41:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:26.932 Found net devices under 0000:84:00.0: cvl_0_0 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:26.932 Found net devices under 0000:84:00.1: cvl_0_1 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.932 20:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:14:26.932 00:14:26.932 --- 10.0.0.2 ping statistics --- 00:14:26.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.932 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:14:26.932 00:14:26.932 --- 10.0.0.1 ping statistics --- 00:14:26.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.932 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:26.932 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1644003 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1644003 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1644003 ']' 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.933 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.933 [2024-10-08 20:41:55.285221] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
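Condensed sketch of the network setup exercised above (nvmf_tcp_init in nvmf/common.sh): one port of the NIC pair is moved into a private namespace for the target while the host side acts as initiator. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and the pinned cores are taken from this run and will differ on other hosts; the nvmf_tgt path is shortened from the full workspace path shown in the log.

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic reach the default port and check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The iptables comment tag (SPDK_NVMF) is what later allows the teardown to strip only the rules added by the test.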
00:14:26.933 [2024-10-08 20:41:55.285403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.933 [2024-10-08 20:41:55.446477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.933 [2024-10-08 20:41:55.672274] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.933 [2024-10-08 20:41:55.672378] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.933 [2024-10-08 20:41:55.672436] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.933 [2024-10-08 20:41:55.672484] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.933 [2024-10-08 20:41:55.672528] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.933 [2024-10-08 20:41:55.674715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.933 [2024-10-08 20:41:55.674747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.933 [2024-10-08 20:41:55.674800] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.933 [2024-10-08 20:41:55.674803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 [2024-10-08 20:41:55.856702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 Malloc0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 [2024-10-08 20:41:55.911031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:27.192 test case1: single bdev can't be used in multiple subsystems 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 [2024-10-08 20:41:55.934829] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:27.192 [2024-10-08 20:41:55.934866] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:27.192 [2024-10-08 20:41:55.934892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.192 request: 00:14:27.192 { 00:14:27.192 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:27.192 "namespace": { 00:14:27.192 "bdev_name": "Malloc0", 00:14:27.192 "no_auto_visible": false 
00:14:27.192 }, 00:14:27.192 "method": "nvmf_subsystem_add_ns", 00:14:27.192 "req_id": 1 00:14:27.192 } 00:14:27.192 Got JSON-RPC error response 00:14:27.192 response: 00:14:27.192 { 00:14:27.192 "code": -32602, 00:14:27.192 "message": "Invalid parameters" 00:14:27.192 } 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:27.192 Adding namespace failed - expected result. 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:27.192 test case2: host connect to nvmf target in multiple paths 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 [2024-10-08 20:41:55.942976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 20:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.125 20:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:28.690 20:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.690 20:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:28.690 20:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.690 20:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:28.690 20:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:30.587 20:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
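The rpc_cmd calls traced above map onto plain scripts/rpc.py invocations against the target's default RPC socket. A condensed sketch of the nmic test flow follows, using the NQNs, serial and addresses from this run; $NVME_HOSTNQN/$NVME_HOSTID stand for the generated host NQN and host ID printed in the connect lines above. The second add_ns is the negative case and is expected to fail because Malloc0 is already claimed by cnode1.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case 1: the same bdev cannot be added to a second subsystem
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed
    # test case 2: add a second listener and connect to the same subsystem over both ports
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421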
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:30.587 [global] 00:14:30.587 thread=1 00:14:30.587 invalidate=1 00:14:30.587 rw=write 00:14:30.587 time_based=1 00:14:30.587 runtime=1 00:14:30.587 ioengine=libaio 00:14:30.587 direct=1 00:14:30.587 bs=4096 00:14:30.587 iodepth=1 00:14:30.587 norandommap=0 00:14:30.587 numjobs=1 00:14:30.587 00:14:30.587 verify_dump=1 00:14:30.587 verify_backlog=512 00:14:30.587 verify_state_save=0 00:14:30.587 do_verify=1 00:14:30.587 verify=crc32c-intel 00:14:30.587 [job0] 00:14:30.587 filename=/dev/nvme0n1 00:14:30.587 Could not set queue depth (nvme0n1) 00:14:30.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:30.844 fio-3.35 00:14:30.844 Starting 1 thread 00:14:32.217 00:14:32.217 job0: (groupid=0, jobs=1): err= 0: pid=1644523: Tue Oct 8 20:42:00 2024 00:14:32.217 read: IOPS=147, BW=590KiB/s (604kB/s)(608KiB/1031msec) 00:14:32.217 slat (nsec): min=7600, max=43694, avg=11458.25, stdev=5725.50 00:14:32.217 clat (usec): min=249, max=41244, avg=6209.49, stdev=14354.79 00:14:32.217 lat (usec): min=257, max=41252, avg=6220.95, stdev=14357.39 00:14:32.217 clat percentiles (usec): 00:14:32.217 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:14:32.217 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 351], 00:14:32.217 | 70.00th=[ 400], 80.00th=[ 429], 90.00th=[41157], 95.00th=[41157], 00:14:32.217 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:32.217 | 99.99th=[41157] 00:14:32.217 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:14:32.217 slat (nsec): min=6959, max=50022, avg=9660.28, stdev=4637.10 00:14:32.217 clat (usec): min=123, max=248, avg=151.37, stdev=16.54 00:14:32.217 lat (usec): min=131, max=289, avg=161.03, stdev=18.33 00:14:32.217 clat percentiles (usec): 00:14:32.217 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:14:32.217 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:14:32.217 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 182], 00:14:32.217 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 249], 99.95th=[ 249], 00:14:32.217 | 99.99th=[ 249] 00:14:32.217 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:32.217 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:32.217 lat (usec) : 250=77.26%, 500=19.28%, 750=0.15% 00:14:32.217 lat (msec) : 50=3.31% 00:14:32.217 cpu : usr=0.19%, sys=0.87%, ctx=666, majf=0, minf=1 00:14:32.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.217 issued rwts: total=152,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.217 00:14:32.217 Run status group 0 (all jobs): 00:14:32.217 READ: bw=590KiB/s (604kB/s), 590KiB/s-590KiB/s (604kB/s-604kB/s), io=608KiB (623kB), run=1031-1031msec 00:14:32.217 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:14:32.217 00:14:32.217 Disk stats (read/write): 00:14:32.217 nvme0n1: ios=198/512, merge=0/0, ticks=1088/75, in_queue=1163, util=99.60% 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
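The fio-wrapper invocation above generated the job printed in the log. An equivalent standalone job file, with the parameters copied from the [global]/[job0] sections shown and the device name as enumerated in this run, would look roughly like the following; the file name nmic-write.fio is illustrative only.

    cat > nmic-write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic-write.fio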
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:32.217 rmmod nvme_tcp 00:14:32.217 rmmod nvme_fabrics 00:14:32.217 rmmod nvme_keyring 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1644003 ']' 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1644003 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1644003 ']' 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1644003 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1644003 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1644003' 00:14:32.217 killing process with pid 1644003 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1644003 00:14:32.217 20:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 1644003 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.784 20:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.318 00:14:35.318 real 0m11.538s 00:14:35.318 user 0m24.026s 00:14:35.318 sys 0m3.330s 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.318 ************************************ 00:14:35.318 END TEST nvmf_nmic 00:14:35.318 ************************************ 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:35.318 ************************************ 00:14:35.318 START TEST nvmf_fio_target 00:14:35.318 ************************************ 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:35.318 * Looking for test storage... 
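nvmftestfini, logged at the end of the nmic test just above, undoes the setup in reverse. Condensed, with the pid and interface names from this run; the final namespace removal is an assumption about what the _remove_spdk_ns helper does, since only its invocation is visible in the log.

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                  # also unloads nvme-fabrics and nvme-keyring
    kill 1644003 && wait 1644003             # stop the nvmf_tgt started earlier
    # drop only the rules tagged by the test, leave everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk          # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1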
00:14:35.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:35.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.318 --rc genhtml_branch_coverage=1 00:14:35.318 --rc genhtml_function_coverage=1 00:14:35.318 --rc genhtml_legend=1 00:14:35.318 --rc geninfo_all_blocks=1 00:14:35.318 --rc geninfo_unexecuted_blocks=1 00:14:35.318 00:14:35.318 ' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:35.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.318 --rc genhtml_branch_coverage=1 00:14:35.318 --rc genhtml_function_coverage=1 00:14:35.318 --rc genhtml_legend=1 00:14:35.318 --rc geninfo_all_blocks=1 00:14:35.318 --rc geninfo_unexecuted_blocks=1 00:14:35.318 00:14:35.318 ' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:35.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.318 --rc genhtml_branch_coverage=1 00:14:35.318 --rc genhtml_function_coverage=1 00:14:35.318 --rc genhtml_legend=1 00:14:35.318 --rc geninfo_all_blocks=1 00:14:35.318 --rc geninfo_unexecuted_blocks=1 00:14:35.318 00:14:35.318 ' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:35.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.318 --rc genhtml_branch_coverage=1 00:14:35.318 --rc genhtml_function_coverage=1 00:14:35.318 --rc genhtml_legend=1 00:14:35.318 --rc geninfo_all_blocks=1 00:14:35.318 --rc geninfo_unexecuted_blocks=1 00:14:35.318 00:14:35.318 ' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.318 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.319 20:42:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.319 20:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.607 20:42:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:38.607 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:38.607 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.607 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.608 20:42:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:38.608 Found net devices under 0000:84:00.0: cvl_0_0 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:38.608 Found net devices under 0000:84:00.1: cvl_0_1 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.608 20:42:06 
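The device-discovery chatter above comes from gather_supported_nvmf_pci_devs in nvmf/common.sh. Stripped of the trace noise, the e810 branch amounts to roughly the following; it omits the link-state check and the RDMA-specific branches, and the PCI addresses and resulting cvl_0_0/cvl_0_1 names are specific to this host.

    # e810 ports are matched by vendor/device ID (0x8086:0x159b), then the kernel
    # net devices behind each PCI function are collected from sysfs.
    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:84:00.0 and 0000:84:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # With two ports found, one becomes the target interface, the other the initiator:
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run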
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:14:38.608 00:14:38.608 --- 10.0.0.2 ping statistics --- 00:14:38.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.608 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:14:38.608 00:14:38.608 --- 10.0.0.1 ping statistics --- 00:14:38.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.608 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1646967 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1646967 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1646967 ']' 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.608 20:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.608 [2024-10-08 20:42:06.973985] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:14:38.608 [2024-10-08 20:42:06.974097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.608 [2024-10-08 20:42:07.097327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.608 [2024-10-08 20:42:07.314944] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.608 [2024-10-08 20:42:07.315083] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.608 [2024-10-08 20:42:07.315142] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.608 [2024-10-08 20:42:07.315198] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.608 [2024-10-08 20:42:07.315237] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.608 [2024-10-08 20:42:07.319415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.608 [2024-10-08 20:42:07.319522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.608 [2024-10-08 20:42:07.319618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.608 [2024-10-08 20:42:07.319621] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.543 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.108 [2024-10-08 20:42:08.784162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.108 20:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.673 20:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:40.673 20:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.238 20:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:41.238 20:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.803 20:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:41.803 20:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:42.061 20:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:42.061 20:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:42.626 20:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:42.884 20:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:42.884 20:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.142 20:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:43.142 20:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.707 20:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:43.707 20:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:44.271 20:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:44.529 20:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:44.529 20:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.098 20:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:45.098 20:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.668 20:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.971 [2024-10-08 20:42:14.701128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.255 20:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:46.513 20:42:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:47.082 20:42:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.650 20:42:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:47.650 20:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:47.650 20:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.650 20:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:47.650 20:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:47.650 20:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:49.551 20:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:49.551 [global] 00:14:49.551 thread=1 00:14:49.551 invalidate=1 00:14:49.551 rw=write 00:14:49.551 time_based=1 00:14:49.551 runtime=1 00:14:49.551 ioengine=libaio 00:14:49.551 direct=1 00:14:49.551 bs=4096 00:14:49.551 iodepth=1 00:14:49.551 norandommap=0 00:14:49.551 numjobs=1 00:14:49.551 00:14:49.551 verify_dump=1 00:14:49.551 verify_backlog=512 00:14:49.551 verify_state_save=0 00:14:49.551 do_verify=1 00:14:49.551 verify=crc32c-intel 00:14:49.551 [job0] 00:14:49.551 filename=/dev/nvme0n1 00:14:49.551 [job1] 00:14:49.551 filename=/dev/nvme0n2 00:14:49.551 [job2] 00:14:49.551 filename=/dev/nvme0n3 00:14:49.551 [job3] 00:14:49.551 filename=/dev/nvme0n4 00:14:49.551 Could not set queue depth (nvme0n1) 00:14:49.551 Could not set queue depth (nvme0n2) 00:14:49.551 Could not set queue depth (nvme0n3) 00:14:49.551 Could not set queue depth (nvme0n4) 00:14:49.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:49.809 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:49.809 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:49.809 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:49.809 fio-3.35 00:14:49.809 Starting 4 threads 00:14:51.185 00:14:51.185 job0: (groupid=0, jobs=1): err= 0: pid=1648968: Tue Oct 8 20:42:19 2024 00:14:51.185 read: IOPS=229, BW=919KiB/s (941kB/s)(920KiB/1001msec) 00:14:51.185 slat (nsec): min=5173, max=18601, avg=9901.35, stdev=4522.08 00:14:51.185 clat (usec): min=190, max=41568, avg=3782.16, stdev=11502.90 00:14:51.185 lat (usec): min=196, max=41584, avg=3792.06, stdev=11504.62 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 
00:14:51.185 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 245], 00:14:51.185 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[41157], 00:14:51.185 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:51.185 | 99.99th=[41681] 00:14:51.185 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:51.185 slat (usec): min=6, max=22406, avg=54.14, stdev=989.77 00:14:51.185 clat (usec): min=130, max=656, avg=192.07, stdev=59.43 00:14:51.185 lat (usec): min=138, max=22686, avg=246.21, stdev=995.54 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:14:51.185 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:14:51.185 | 70.00th=[ 200], 80.00th=[ 221], 90.00th=[ 277], 95.00th=[ 293], 00:14:51.185 | 99.00th=[ 347], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 660], 00:14:51.185 | 99.99th=[ 660] 00:14:51.185 bw ( KiB/s): min= 4096, max= 4096, per=25.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:51.185 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:51.185 lat (usec) : 250=77.36%, 500=19.54%, 750=0.40% 00:14:51.185 lat (msec) : 50=2.70% 00:14:51.185 cpu : usr=0.30%, sys=0.80%, ctx=744, majf=0, minf=1 00:14:51.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:51.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 issued rwts: total=230,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.185 job1: (groupid=0, jobs=1): err= 0: pid=1648970: Tue Oct 8 20:42:19 2024 00:14:51.185 read: IOPS=25, BW=101KiB/s (104kB/s)(104KiB/1026msec) 00:14:51.185 slat (nsec): min=8527, max=18211, avg=11619.27, stdev=2421.25 00:14:51.185 clat (usec): min=225, max=41938, avg=34710.13, stdev=14933.52 00:14:51.185 lat (usec): min=240, max=41949, avg=34721.75, stdev=14931.95 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 227], 5.00th=[ 326], 10.00th=[ 392], 20.00th=[40633], 00:14:51.185 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:51.185 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:51.185 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:51.185 | 99.99th=[41681] 00:14:51.185 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:14:51.185 slat (nsec): min=9709, max=52616, avg=11770.52, stdev=3183.85 00:14:51.185 clat (usec): min=147, max=427, avg=226.15, stdev=37.39 00:14:51.185 lat (usec): min=157, max=440, avg=237.92, stdev=37.93 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 176], 20.00th=[ 200], 00:14:51.185 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 237], 00:14:51.185 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:14:51.185 | 99.00th=[ 359], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 429], 00:14:51.185 | 99.99th=[ 429] 00:14:51.185 bw ( KiB/s): min= 4096, max= 4096, per=25.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:51.185 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:51.185 lat (usec) : 250=81.78%, 500=13.94%, 750=0.19% 00:14:51.185 lat (msec) : 50=4.09% 00:14:51.185 cpu : usr=0.39%, sys=0.39%, ctx=538, majf=0, minf=1 00:14:51.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:14:51.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.185 job2: (groupid=0, jobs=1): err= 0: pid=1648971: Tue Oct 8 20:42:19 2024 00:14:51.185 read: IOPS=2068, BW=8276KiB/s (8474kB/s)(8284KiB/1001msec) 00:14:51.185 slat (nsec): min=7102, max=32357, avg=7985.81, stdev=1527.25 00:14:51.185 clat (usec): min=182, max=796, avg=238.56, stdev=36.95 00:14:51.185 lat (usec): min=190, max=804, avg=246.54, stdev=37.11 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 212], 00:14:51.185 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:14:51.185 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:14:51.185 | 99.00th=[ 334], 99.50th=[ 367], 99.90th=[ 685], 99.95th=[ 725], 00:14:51.185 | 99.99th=[ 799] 00:14:51.185 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:51.185 slat (nsec): min=9068, max=33328, avg=10483.30, stdev=2047.00 00:14:51.185 clat (usec): min=133, max=335, avg=176.44, stdev=29.33 00:14:51.185 lat (usec): min=142, max=346, avg=186.93, stdev=29.78 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:14:51.185 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:14:51.185 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 225], 95.00th=[ 241], 00:14:51.185 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 326], 00:14:51.185 | 99.99th=[ 334] 00:14:51.185 bw ( KiB/s): min=10096, max=10096, per=63.22%, avg=10096.00, stdev= 0.00, samples=1 00:14:51.185 iops : min= 2524, max= 2524, avg=2524.00, stdev= 0.00, samples=1 00:14:51.185 lat (usec) : 250=85.66%, 500=14.25%, 750=0.06%, 1000=0.02% 00:14:51.185 cpu : usr=2.20%, sys=6.60%, ctx=4632, majf=0, minf=1 00:14:51.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:51.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 issued rwts: total=2071,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.185 job3: (groupid=0, jobs=1): err= 0: pid=1648972: Tue Oct 8 20:42:19 2024 00:14:51.185 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:14:51.185 slat (nsec): min=8544, max=25205, avg=14695.29, stdev=2740.59 00:14:51.185 clat (usec): min=40929, max=41982, avg=41170.14, stdev=396.38 00:14:51.185 lat (usec): min=40938, max=41996, avg=41184.84, stdev=397.71 00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:51.185 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:51.185 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:14:51.185 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:51.185 | 99.99th=[42206] 00:14:51.185 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:14:51.185 slat (usec): min=8, max=30753, avg=70.48, stdev=1358.68 00:14:51.185 clat (usec): min=146, max=400, avg=213.89, stdev=34.76 00:14:51.185 lat (usec): min=155, max=30974, avg=284.37, stdev=1359.45 
00:14:51.185 clat percentiles (usec): 00:14:51.185 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:14:51.185 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 225], 00:14:51.185 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:14:51.185 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 400], 99.95th=[ 400], 00:14:51.185 | 99.99th=[ 400] 00:14:51.185 bw ( KiB/s): min= 4096, max= 4096, per=25.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:51.185 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:51.185 lat (usec) : 250=81.99%, 500=14.07% 00:14:51.185 lat (msec) : 50=3.94% 00:14:51.185 cpu : usr=0.40%, sys=0.59%, ctx=535, majf=0, minf=1 00:14:51.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:51.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.185 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.185 00:14:51.185 Run status group 0 (all jobs): 00:14:51.185 READ: bw=9154KiB/s (9374kB/s), 83.0KiB/s-8276KiB/s (85.0kB/s-8474kB/s), io=9392KiB (9617kB), run=1001-1026msec 00:14:51.185 WRITE: bw=15.6MiB/s (16.4MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1026msec 00:14:51.185 00:14:51.185 Disk stats (read/write): 00:14:51.185 nvme0n1: ios=43/512, merge=0/0, ticks=1561/99, in_queue=1660, util=85.37% 00:14:51.185 nvme0n2: ios=69/512, merge=0/0, ticks=771/118, in_queue=889, util=90.94% 00:14:51.185 nvme0n3: ios=1925/2048, merge=0/0, ticks=581/359, in_queue=940, util=93.40% 00:14:51.185 nvme0n4: ios=69/512, merge=0/0, ticks=931/103, in_queue=1034, util=95.35% 00:14:51.185 20:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:51.185 [global] 00:14:51.185 thread=1 00:14:51.185 invalidate=1 00:14:51.185 rw=randwrite 00:14:51.185 time_based=1 00:14:51.185 runtime=1 00:14:51.185 ioengine=libaio 00:14:51.185 direct=1 00:14:51.185 bs=4096 00:14:51.185 iodepth=1 00:14:51.185 norandommap=0 00:14:51.185 numjobs=1 00:14:51.185 00:14:51.185 verify_dump=1 00:14:51.185 verify_backlog=512 00:14:51.185 verify_state_save=0 00:14:51.185 do_verify=1 00:14:51.185 verify=crc32c-intel 00:14:51.185 [job0] 00:14:51.185 filename=/dev/nvme0n1 00:14:51.185 [job1] 00:14:51.185 filename=/dev/nvme0n2 00:14:51.185 [job2] 00:14:51.185 filename=/dev/nvme0n3 00:14:51.185 [job3] 00:14:51.185 filename=/dev/nvme0n4 00:14:51.185 Could not set queue depth (nvme0n1) 00:14:51.185 Could not set queue depth (nvme0n2) 00:14:51.185 Could not set queue depth (nvme0n3) 00:14:51.185 Could not set queue depth (nvme0n4) 00:14:51.443 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:51.443 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:51.443 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:51.443 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:51.443 fio-3.35 00:14:51.443 Starting 4 threads 00:14:52.819 00:14:52.819 job0: (groupid=0, jobs=1): err= 0: pid=1649196: Tue Oct 8 20:42:21 2024 00:14:52.819 read: IOPS=21, 
BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:14:52.819 slat (nsec): min=9853, max=17639, avg=14299.41, stdev=1562.57 00:14:52.819 clat (usec): min=40951, max=41988, avg=41085.34, stdev=283.97 00:14:52.819 lat (usec): min=40965, max=41999, avg=41099.64, stdev=283.33 00:14:52.819 clat percentiles (usec): 00:14:52.819 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:52.819 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:52.819 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:52.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:52.819 | 99.99th=[42206] 00:14:52.819 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:14:52.819 slat (nsec): min=10203, max=44992, avg=13494.66, stdev=3624.77 00:14:52.819 clat (usec): min=157, max=833, avg=221.33, stdev=60.72 00:14:52.819 lat (usec): min=167, max=847, avg=234.82, stdev=60.58 00:14:52.819 clat percentiles (usec): 00:14:52.819 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:14:52.819 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 219], 00:14:52.820 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:14:52.820 | 99.00th=[ 359], 99.50th=[ 652], 99.90th=[ 832], 99.95th=[ 832], 00:14:52.820 | 99.99th=[ 832] 00:14:52.820 bw ( KiB/s): min= 4096, max= 4096, per=22.62%, avg=4096.00, stdev= 0.00, samples=1 00:14:52.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:52.820 lat (usec) : 250=70.79%, 500=24.34%, 750=0.37%, 1000=0.37% 00:14:52.820 lat (msec) : 50=4.12% 00:14:52.820 cpu : usr=0.58%, sys=0.58%, ctx=535, majf=0, minf=1 00:14:52.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:52.820 job1: (groupid=0, jobs=1): err= 0: pid=1649206: Tue Oct 8 20:42:21 2024 00:14:52.820 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:52.820 slat (nsec): min=5910, max=29706, avg=7251.07, stdev=2244.20 00:14:52.820 clat (usec): min=186, max=41254, avg=437.46, stdev=2728.77 00:14:52.820 lat (usec): min=193, max=41260, avg=444.71, stdev=2728.74 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:14:52.820 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:14:52.820 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 355], 95.00th=[ 375], 00:14:52.820 | 99.00th=[ 529], 99.50th=[ 619], 99.90th=[40633], 99.95th=[41157], 00:14:52.820 | 99.99th=[41157] 00:14:52.820 write: IOPS=1602, BW=6410KiB/s (6563kB/s)(6416KiB/1001msec); 0 zone resets 00:14:52.820 slat (nsec): min=7568, max=59095, avg=9283.38, stdev=2818.68 00:14:52.820 clat (usec): min=129, max=501, avg=182.78, stdev=27.65 00:14:52.820 lat (usec): min=137, max=560, avg=192.06, stdev=28.20 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 161], 00:14:52.820 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:14:52.820 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 229], 00:14:52.820 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 396], 99.95th=[ 502], 00:14:52.820 | 99.99th=[ 502] 00:14:52.820 bw ( 
KiB/s): min= 8192, max= 8192, per=45.24%, avg=8192.00, stdev= 0.00, samples=1 00:14:52.820 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:52.820 lat (usec) : 250=81.94%, 500=17.52%, 750=0.32% 00:14:52.820 lat (msec) : 50=0.22% 00:14:52.820 cpu : usr=2.00%, sys=3.50%, ctx=3143, majf=0, minf=1 00:14:52.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 issued rwts: total=1536,1604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:52.820 job2: (groupid=0, jobs=1): err= 0: pid=1649208: Tue Oct 8 20:42:21 2024 00:14:52.820 read: IOPS=1821, BW=7288KiB/s (7463kB/s)(7492KiB/1028msec) 00:14:52.820 slat (nsec): min=5879, max=29076, avg=8489.90, stdev=2213.38 00:14:52.820 clat (usec): min=178, max=40977, avg=323.85, stdev=1868.36 00:14:52.820 lat (usec): min=186, max=40993, avg=332.34, stdev=1868.44 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:14:52.820 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:14:52.820 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 297], 00:14:52.820 | 99.00th=[ 515], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:14:52.820 | 99.99th=[41157] 00:14:52.820 write: IOPS=1992, BW=7969KiB/s (8160kB/s)(8192KiB/1028msec); 0 zone resets 00:14:52.820 slat (nsec): min=7396, max=62623, avg=10589.06, stdev=3369.33 00:14:52.820 clat (usec): min=129, max=405, avg=181.50, stdev=33.75 00:14:52.820 lat (usec): min=138, max=415, avg=192.08, stdev=34.44 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:14:52.820 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:14:52.820 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 231], 95.00th=[ 245], 00:14:52.820 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 375], 99.95th=[ 388], 00:14:52.820 | 99.99th=[ 408] 00:14:52.820 bw ( KiB/s): min= 5872, max=10512, per=45.24%, avg=8192.00, stdev=3280.98, samples=2 00:14:52.820 iops : min= 1468, max= 2628, avg=2048.00, stdev=820.24, samples=2 00:14:52.820 lat (usec) : 250=88.93%, 500=10.48%, 750=0.48% 00:14:52.820 lat (msec) : 50=0.10% 00:14:52.820 cpu : usr=1.75%, sys=4.97%, ctx=3922, majf=0, minf=1 00:14:52.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 issued rwts: total=1873,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:52.820 job3: (groupid=0, jobs=1): err= 0: pid=1649209: Tue Oct 8 20:42:21 2024 00:14:52.820 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:14:52.820 slat (nsec): min=9850, max=33359, avg=15401.18, stdev=4290.88 00:14:52.820 clat (usec): min=40950, max=44948, avg=41427.87, stdev=902.76 00:14:52.820 lat (usec): min=40964, max=44965, avg=41443.27, stdev=903.84 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:52.820 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:52.820 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 
95.00th=[42206], 00:14:52.820 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:52.820 | 99.99th=[44827] 00:14:52.820 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:14:52.820 slat (nsec): min=9087, max=41188, avg=13573.69, stdev=6013.92 00:14:52.820 clat (usec): min=160, max=1213, avg=218.74, stdev=49.66 00:14:52.820 lat (usec): min=171, max=1223, avg=232.32, stdev=49.91 00:14:52.820 clat percentiles (usec): 00:14:52.820 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:14:52.820 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:14:52.820 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 253], 00:14:52.820 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 1221], 99.95th=[ 1221], 00:14:52.820 | 99.99th=[ 1221] 00:14:52.820 bw ( KiB/s): min= 4096, max= 4096, per=22.62%, avg=4096.00, stdev= 0.00, samples=1 00:14:52.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:52.820 lat (usec) : 250=90.45%, 500=5.24% 00:14:52.820 lat (msec) : 2=0.19%, 50=4.12% 00:14:52.820 cpu : usr=0.48%, sys=0.68%, ctx=534, majf=0, minf=1 00:14:52.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.820 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:52.820 00:14:52.820 Run status group 0 (all jobs): 00:14:52.821 READ: bw=13.1MiB/s (13.7MB/s), 85.2KiB/s-7288KiB/s (87.2kB/s-7463kB/s), io=13.5MiB (14.1MB), run=1001-1033msec 00:14:52.821 WRITE: bw=17.7MiB/s (18.5MB/s), 1983KiB/s-7969KiB/s (2030kB/s-8160kB/s), io=18.3MiB (19.2MB), run=1001-1033msec 00:14:52.821 00:14:52.821 Disk stats (read/write): 00:14:52.821 nvme0n1: ios=69/512, merge=0/0, ticks=1545/104, in_queue=1649, util=98.60% 00:14:52.821 nvme0n2: ios=1388/1536, merge=0/0, ticks=1474/269, in_queue=1743, util=98.67% 00:14:52.821 nvme0n3: ios=1580/1590, merge=0/0, ticks=656/283, in_queue=939, util=99.36% 00:14:52.821 nvme0n4: ios=16/512, merge=0/0, ticks=666/108, in_queue=774, util=89.31% 00:14:52.821 20:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:52.821 [global] 00:14:52.821 thread=1 00:14:52.821 invalidate=1 00:14:52.821 rw=write 00:14:52.821 time_based=1 00:14:52.821 runtime=1 00:14:52.821 ioengine=libaio 00:14:52.821 direct=1 00:14:52.821 bs=4096 00:14:52.821 iodepth=128 00:14:52.821 norandommap=0 00:14:52.821 numjobs=1 00:14:52.821 00:14:52.821 verify_dump=1 00:14:52.821 verify_backlog=512 00:14:52.821 verify_state_save=0 00:14:52.821 do_verify=1 00:14:52.821 verify=crc32c-intel 00:14:52.821 [job0] 00:14:52.821 filename=/dev/nvme0n1 00:14:52.821 [job1] 00:14:52.821 filename=/dev/nvme0n2 00:14:52.821 [job2] 00:14:52.821 filename=/dev/nvme0n3 00:14:52.821 [job3] 00:14:52.821 filename=/dev/nvme0n4 00:14:52.821 Could not set queue depth (nvme0n1) 00:14:52.821 Could not set queue depth (nvme0n2) 00:14:52.821 Could not set queue depth (nvme0n3) 00:14:52.821 Could not set queue depth (nvme0n4) 00:14:52.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:52.821 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:14:52.821 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:52.821 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:52.821 fio-3.35 00:14:52.821 Starting 4 threads 00:14:54.217 00:14:54.217 job0: (groupid=0, jobs=1): err= 0: pid=1649435: Tue Oct 8 20:42:22 2024 00:14:54.217 read: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1005msec) 00:14:54.217 slat (usec): min=2, max=25274, avg=138.44, stdev=956.46 00:14:54.217 clat (usec): min=3840, max=49850, avg=16479.17, stdev=7363.03 00:14:54.217 lat (usec): min=5684, max=49853, avg=16617.61, stdev=7430.19 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[11207], 20.00th=[11863], 00:14:54.217 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12780], 60.00th=[15008], 00:14:54.217 | 70.00th=[18220], 80.00th=[20841], 90.00th=[25560], 95.00th=[34866], 00:14:54.217 | 99.00th=[41681], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:14:54.217 | 99.99th=[50070] 00:14:54.217 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:14:54.217 slat (usec): min=3, max=11416, avg=146.09, stdev=789.05 00:14:54.217 clat (usec): min=900, max=72168, avg=20609.97, stdev=14439.83 00:14:54.217 lat (usec): min=930, max=72174, avg=20756.05, stdev=14516.53 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 3195], 5.00th=[ 7570], 10.00th=[ 9896], 20.00th=[10945], 00:14:54.217 | 30.00th=[11994], 40.00th=[12911], 50.00th=[15139], 60.00th=[15664], 00:14:54.217 | 70.00th=[18482], 80.00th=[36963], 90.00th=[43254], 95.00th=[50070], 00:14:54.217 | 99.00th=[69731], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:14:54.217 | 99.99th=[71828] 00:14:54.217 bw ( KiB/s): min=12288, max=16384, per=22.67%, avg=14336.00, stdev=2896.31, samples=2 00:14:54.217 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:14:54.217 lat (usec) : 1000=0.01% 00:14:54.217 lat (msec) : 2=0.23%, 4=0.56%, 10=7.92%, 20=65.89%, 50=22.76% 00:14:54.217 lat (msec) : 100=2.63% 00:14:54.217 cpu : usr=2.29%, sys=4.38%, ctx=322, majf=0, minf=1 00:14:54.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:54.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.217 issued rwts: total=3249,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.217 job1: (groupid=0, jobs=1): err= 0: pid=1649436: Tue Oct 8 20:42:22 2024 00:14:54.217 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:14:54.217 slat (usec): min=2, max=36846, avg=123.13, stdev=984.56 00:14:54.217 clat (usec): min=1208, max=98062, avg=15840.25, stdev=12960.02 00:14:54.217 lat (usec): min=1217, max=98093, avg=15963.37, stdev=13030.45 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 3523], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10421], 00:14:54.217 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:14:54.217 | 70.00th=[12780], 80.00th=[20055], 90.00th=[21627], 95.00th=[29492], 00:14:54.217 | 99.00th=[82314], 99.50th=[91751], 99.90th=[98042], 99.95th=[98042], 00:14:54.217 | 99.99th=[98042] 00:14:54.217 write: IOPS=4108, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:14:54.217 slat (usec): min=3, max=11349, avg=113.16, stdev=681.08 00:14:54.217 clat (usec): 
min=779, max=81292, avg=15103.40, stdev=13691.86 00:14:54.217 lat (usec): min=2664, max=81301, avg=15216.56, stdev=13762.67 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 4359], 5.00th=[ 7832], 10.00th=[ 9503], 20.00th=[ 9896], 00:14:54.217 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11338], 60.00th=[11731], 00:14:54.217 | 70.00th=[12256], 80.00th=[13173], 90.00th=[20317], 95.00th=[53216], 00:14:54.217 | 99.00th=[77071], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:14:54.217 | 99.99th=[81265] 00:14:54.217 bw ( KiB/s): min=12288, max=20480, per=25.91%, avg=16384.00, stdev=5792.62, samples=2 00:14:54.217 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:14:54.217 lat (usec) : 1000=0.01% 00:14:54.217 lat (msec) : 2=0.11%, 4=0.73%, 10=17.51%, 20=66.13%, 50=11.04% 00:14:54.217 lat (msec) : 100=4.47% 00:14:54.217 cpu : usr=2.79%, sys=5.48%, ctx=424, majf=0, minf=1 00:14:54.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:54.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.217 issued rwts: total=4096,4129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.217 job2: (groupid=0, jobs=1): err= 0: pid=1649437: Tue Oct 8 20:42:22 2024 00:14:54.217 read: IOPS=3607, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1005msec) 00:14:54.217 slat (usec): min=2, max=47429, avg=138.34, stdev=1091.58 00:14:54.217 clat (usec): min=835, max=79030, avg=16141.79, stdev=8703.59 00:14:54.217 lat (usec): min=6472, max=79048, avg=16280.12, stdev=8785.18 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12125], 00:14:54.217 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:14:54.217 | 70.00th=[14353], 80.00th=[16450], 90.00th=[27132], 95.00th=[33424], 00:14:54.217 | 99.00th=[43254], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:14:54.217 | 99.99th=[79168] 00:14:54.217 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:14:54.217 slat (usec): min=4, max=8055, avg=115.79, stdev=650.34 00:14:54.217 clat (usec): min=5148, max=76214, avg=16725.80, stdev=9102.49 00:14:54.217 lat (usec): min=5158, max=76222, avg=16841.59, stdev=9120.72 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 7111], 5.00th=[10945], 10.00th=[11469], 20.00th=[12125], 00:14:54.217 | 30.00th=[12518], 40.00th=[13435], 50.00th=[14353], 60.00th=[15139], 00:14:54.217 | 70.00th=[16909], 80.00th=[20579], 90.00th=[21627], 95.00th=[23987], 00:14:54.217 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:14:54.217 | 99.99th=[76022] 00:14:54.217 bw ( KiB/s): min=11600, max=20521, per=25.40%, avg=16060.50, stdev=6308.10, samples=2 00:14:54.217 iops : min= 2900, max= 5130, avg=4015.00, stdev=1576.85, samples=2 00:14:54.217 lat (usec) : 1000=0.01% 00:14:54.217 lat (msec) : 10=4.93%, 20=73.85%, 50=19.55%, 100=1.64% 00:14:54.217 cpu : usr=2.99%, sys=5.68%, ctx=369, majf=0, minf=1 00:14:54.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:54.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.217 issued rwts: total=3626,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.217 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:14:54.217 job3: (groupid=0, jobs=1): err= 0: pid=1649438: Tue Oct 8 20:42:22 2024 00:14:54.217 read: IOPS=3659, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1006msec) 00:14:54.217 slat (usec): min=2, max=12827, avg=131.77, stdev=844.43 00:14:54.217 clat (usec): min=676, max=39803, avg=16591.77, stdev=6778.12 00:14:54.217 lat (usec): min=6130, max=39819, avg=16723.54, stdev=6850.76 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 6259], 5.00th=[ 9634], 10.00th=[11469], 20.00th=[12256], 00:14:54.217 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14222], 60.00th=[15139], 00:14:54.217 | 70.00th=[16581], 80.00th=[21365], 90.00th=[28181], 95.00th=[32375], 00:14:54.217 | 99.00th=[34341], 99.50th=[34866], 99.90th=[38536], 99.95th=[39060], 00:14:54.217 | 99.99th=[39584] 00:14:54.217 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:14:54.217 slat (usec): min=3, max=11224, avg=117.13, stdev=718.89 00:14:54.217 clat (usec): min=706, max=36604, avg=16242.37, stdev=6912.29 00:14:54.217 lat (usec): min=734, max=44175, avg=16359.49, stdev=6958.54 00:14:54.217 clat percentiles (usec): 00:14:54.217 | 1.00th=[ 2737], 5.00th=[ 6718], 10.00th=[ 9765], 20.00th=[11994], 00:14:54.217 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14746], 60.00th=[15533], 00:14:54.217 | 70.00th=[16909], 80.00th=[20317], 90.00th=[26870], 95.00th=[31065], 00:14:54.217 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:14:54.217 | 99.99th=[36439] 00:14:54.217 bw ( KiB/s): min=12048, max=20472, per=25.71%, avg=16260.00, stdev=5956.67, samples=2 00:14:54.217 iops : min= 3012, max= 5118, avg=4065.00, stdev=1489.17, samples=2 00:14:54.217 lat (usec) : 750=0.04%, 1000=0.12% 00:14:54.217 lat (msec) : 2=0.08%, 4=0.81%, 10=8.98%, 20=67.69%, 50=22.30% 00:14:54.217 cpu : usr=2.59%, sys=4.98%, ctx=361, majf=0, minf=1 00:14:54.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:54.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.217 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.217 00:14:54.217 Run status group 0 (all jobs): 00:14:54.217 READ: bw=56.9MiB/s (59.7MB/s), 12.6MiB/s-15.9MiB/s (13.2MB/s-16.7MB/s), io=57.2MiB (60.0MB), run=1005-1006msec 00:14:54.217 WRITE: bw=61.8MiB/s (64.8MB/s), 13.9MiB/s-16.0MiB/s (14.6MB/s-16.8MB/s), io=62.1MiB (65.1MB), run=1005-1006msec 00:14:54.217 00:14:54.217 Disk stats (read/write): 00:14:54.217 nvme0n1: ios=2598/3071, merge=0/0, ticks=23841/38894, in_queue=62735, util=98.50% 00:14:54.217 nvme0n2: ios=3282/3584, merge=0/0, ticks=28201/38068, in_queue=66269, util=96.84% 00:14:54.217 nvme0n3: ios=3092/3552, merge=0/0, ticks=21393/21118, in_queue=42511, util=98.63% 00:14:54.217 nvme0n4: ios=3471/3584, merge=0/0, ticks=37610/36985, in_queue=74595, util=89.59% 00:14:54.217 20:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:54.217 [global] 00:14:54.217 thread=1 00:14:54.217 invalidate=1 00:14:54.217 rw=randwrite 00:14:54.217 time_based=1 00:14:54.218 runtime=1 00:14:54.218 ioengine=libaio 00:14:54.218 direct=1 00:14:54.218 bs=4096 00:14:54.218 iodepth=128 00:14:54.218 norandommap=0 00:14:54.218 numjobs=1 00:14:54.218 00:14:54.218 verify_dump=1 00:14:54.218 
verify_backlog=512 00:14:54.218 verify_state_save=0 00:14:54.218 do_verify=1 00:14:54.218 verify=crc32c-intel 00:14:54.218 [job0] 00:14:54.218 filename=/dev/nvme0n1 00:14:54.218 [job1] 00:14:54.218 filename=/dev/nvme0n2 00:14:54.218 [job2] 00:14:54.218 filename=/dev/nvme0n3 00:14:54.218 [job3] 00:14:54.218 filename=/dev/nvme0n4 00:14:54.218 Could not set queue depth (nvme0n1) 00:14:54.218 Could not set queue depth (nvme0n2) 00:14:54.218 Could not set queue depth (nvme0n3) 00:14:54.218 Could not set queue depth (nvme0n4) 00:14:54.477 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:54.477 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:54.477 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:54.477 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:54.477 fio-3.35 00:14:54.477 Starting 4 threads 00:14:55.854 00:14:55.854 job0: (groupid=0, jobs=1): err= 0: pid=1649782: Tue Oct 8 20:42:24 2024 00:14:55.854 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:14:55.854 slat (usec): min=2, max=24653, avg=174.39, stdev=1337.94 00:14:55.854 clat (usec): min=7915, max=77565, avg=20817.91, stdev=14587.12 00:14:55.854 lat (usec): min=9273, max=77722, avg=20992.30, stdev=14700.09 00:14:55.854 clat percentiles (usec): 00:14:55.854 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11600], 20.00th=[12125], 00:14:55.854 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12780], 60.00th=[15401], 00:14:55.854 | 70.00th=[22414], 80.00th=[30802], 90.00th=[37487], 95.00th=[52167], 00:14:55.854 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:14:55.854 | 99.99th=[77071] 00:14:55.854 write: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1004msec); 0 zone resets 00:14:55.854 slat (usec): min=4, max=25126, avg=186.85, stdev=1412.88 00:14:55.854 clat (usec): min=3093, max=90530, avg=25045.38, stdev=22305.80 00:14:55.854 lat (usec): min=3805, max=90546, avg=25232.23, stdev=22415.34 00:14:55.854 clat percentiles (usec): 00:14:55.854 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[11076], 20.00th=[11731], 00:14:55.854 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[14091], 00:14:55.854 | 70.00th=[21890], 80.00th=[39060], 90.00th=[66847], 95.00th=[76022], 00:14:55.854 | 99.00th=[89654], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:14:55.854 | 99.99th=[90702] 00:14:55.854 bw ( KiB/s): min= 4360, max=17488, per=17.03%, avg=10924.00, stdev=9282.90, samples=2 00:14:55.854 iops : min= 1090, max= 4372, avg=2731.00, stdev=2320.72, samples=2 00:14:55.854 lat (msec) : 4=0.17%, 10=3.05%, 20=65.95%, 50=19.73%, 100=11.11% 00:14:55.854 cpu : usr=2.79%, sys=3.49%, ctx=254, majf=0, minf=1 00:14:55.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:55.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.854 issued rwts: total=2560,2858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.854 job1: (groupid=0, jobs=1): err= 0: pid=1649783: Tue Oct 8 20:42:24 2024 00:14:55.854 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:14:55.854 slat (usec): min=2, max=11341, avg=113.92, stdev=676.26 00:14:55.854 clat (usec): 
min=8553, max=40855, avg=14454.26, stdev=4905.58 00:14:55.854 lat (usec): min=8564, max=40861, avg=14568.17, stdev=4960.26 00:14:55.854 clat percentiles (usec): 00:14:55.854 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10683], 20.00th=[11994], 00:14:55.854 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:14:55.854 | 70.00th=[14877], 80.00th=[16057], 90.00th=[19006], 95.00th=[22414], 00:14:55.854 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:14:55.854 | 99.99th=[40633] 00:14:55.854 write: IOPS=3206, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1003msec); 0 zone resets 00:14:55.854 slat (usec): min=3, max=14426, avg=194.94, stdev=1093.77 00:14:55.854 clat (usec): min=408, max=151547, avg=25768.91, stdev=32030.26 00:14:55.854 lat (usec): min=732, max=151553, avg=25963.84, stdev=32256.22 00:14:55.854 clat percentiles (msec): 00:14:55.854 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:14:55.854 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:14:55.854 | 70.00th=[ 16], 80.00th=[ 29], 90.00th=[ 66], 95.00th=[ 124], 00:14:55.854 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:14:55.854 | 99.99th=[ 153] 00:14:55.854 bw ( KiB/s): min= 8184, max=16520, per=19.26%, avg=12352.00, stdev=5894.44, samples=2 00:14:55.855 iops : min= 2046, max= 4130, avg=3088.00, stdev=1473.61, samples=2 00:14:55.855 lat (usec) : 500=0.02%, 750=0.08% 00:14:55.855 lat (msec) : 4=0.37%, 10=5.04%, 20=77.26%, 50=11.40%, 100=2.04% 00:14:55.855 lat (msec) : 250=3.80% 00:14:55.855 cpu : usr=2.20%, sys=4.99%, ctx=339, majf=0, minf=1 00:14:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.855 issued rwts: total=3072,3216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.855 job2: (groupid=0, jobs=1): err= 0: pid=1649790: Tue Oct 8 20:42:24 2024 00:14:55.855 read: IOPS=4825, BW=18.8MiB/s (19.8MB/s)(19.0MiB/1007msec) 00:14:55.855 slat (usec): min=3, max=12604, avg=100.71, stdev=699.42 00:14:55.855 clat (usec): min=1105, max=37369, avg=13198.54, stdev=4062.44 00:14:55.855 lat (usec): min=1112, max=37379, avg=13299.24, stdev=4093.54 00:14:55.855 clat percentiles (usec): 00:14:55.855 | 1.00th=[ 4555], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[10421], 00:14:55.855 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:14:55.855 | 70.00th=[13960], 80.00th=[16319], 90.00th=[19530], 95.00th=[20841], 00:14:55.855 | 99.00th=[24773], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:14:55.855 | 99.99th=[37487] 00:14:55.855 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:14:55.855 slat (usec): min=5, max=11600, avg=87.41, stdev=449.97 00:14:55.855 clat (usec): min=1027, max=32647, avg=12324.20, stdev=4347.97 00:14:55.855 lat (usec): min=1046, max=32657, avg=12411.61, stdev=4374.52 00:14:55.855 clat percentiles (usec): 00:14:55.855 | 1.00th=[ 4293], 5.00th=[ 5997], 10.00th=[ 7439], 20.00th=[ 9634], 00:14:55.855 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[13042], 00:14:55.855 | 70.00th=[13435], 80.00th=[14091], 90.00th=[14746], 95.00th=[20317], 00:14:55.855 | 99.00th=[30278], 99.50th=[30278], 99.90th=[32637], 99.95th=[32637], 00:14:55.855 | 99.99th=[32637] 00:14:55.855 bw ( KiB/s): min=16976, max=23984, per=31.93%, 
avg=20480.00, stdev=4955.40, samples=2 00:14:55.855 iops : min= 4244, max= 5996, avg=5120.00, stdev=1238.85, samples=2 00:14:55.855 lat (msec) : 2=0.24%, 4=0.40%, 10=17.09%, 20=75.74%, 50=6.53% 00:14:55.855 cpu : usr=6.86%, sys=8.55%, ctx=603, majf=0, minf=1 00:14:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.855 issued rwts: total=4859,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.855 job3: (groupid=0, jobs=1): err= 0: pid=1649791: Tue Oct 8 20:42:24 2024 00:14:55.855 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:14:55.855 slat (usec): min=2, max=12998, avg=97.55, stdev=661.99 00:14:55.855 clat (usec): min=1959, max=27478, avg=13441.35, stdev=3002.67 00:14:55.855 lat (usec): min=1974, max=29022, avg=13538.91, stdev=3033.23 00:14:55.855 clat percentiles (usec): 00:14:55.855 | 1.00th=[ 3523], 5.00th=[ 7439], 10.00th=[10290], 20.00th=[12125], 00:14:55.855 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13698], 60.00th=[13829], 00:14:55.855 | 70.00th=[14353], 80.00th=[14615], 90.00th=[16319], 95.00th=[18482], 00:14:55.855 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23725], 99.95th=[24773], 00:14:55.855 | 99.99th=[27395] 00:14:55.855 write: IOPS=4938, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1003msec); 0 zone resets 00:14:55.855 slat (usec): min=3, max=18462, avg=100.52, stdev=753.27 00:14:55.855 clat (usec): min=337, max=46245, avg=13177.77, stdev=4746.14 00:14:55.855 lat (usec): min=371, max=46252, avg=13278.29, stdev=4778.96 00:14:55.855 clat percentiles (usec): 00:14:55.855 | 1.00th=[ 2999], 5.00th=[ 5145], 10.00th=[ 9503], 20.00th=[10945], 00:14:55.855 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13435], 00:14:55.855 | 70.00th=[13829], 80.00th=[14877], 90.00th=[18220], 95.00th=[20841], 00:14:55.855 | 99.00th=[28967], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:14:55.855 | 99.99th=[46400] 00:14:55.855 bw ( KiB/s): min=18120, max=20480, per=30.09%, avg=19300.00, stdev=1668.77, samples=2 00:14:55.855 iops : min= 4530, max= 5120, avg=4825.00, stdev=417.19, samples=2 00:14:55.855 lat (usec) : 500=0.04% 00:14:55.855 lat (msec) : 2=0.40%, 4=2.01%, 10=7.45%, 20=85.42%, 50=4.69% 00:14:55.855 cpu : usr=3.59%, sys=7.29%, ctx=295, majf=0, minf=1 00:14:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.855 issued rwts: total=4608,4953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.855 00:14:55.855 Run status group 0 (all jobs): 00:14:55.855 READ: bw=58.6MiB/s (61.4MB/s), 9.96MiB/s-18.8MiB/s (10.4MB/s-19.8MB/s), io=59.0MiB (61.8MB), run=1003-1007msec 00:14:55.855 WRITE: bw=62.6MiB/s (65.7MB/s), 11.1MiB/s-19.9MiB/s (11.7MB/s-20.8MB/s), io=63.1MiB (66.1MB), run=1003-1007msec 00:14:55.855 00:14:55.855 Disk stats (read/write): 00:14:55.855 nvme0n1: ios=1586/2048, merge=0/0, ticks=12066/16363, in_queue=28429, util=88.38% 00:14:55.855 nvme0n2: ios=2048/2391, merge=0/0, ticks=17964/55993, in_queue=73957, util=83.95% 00:14:55.855 nvme0n3: ios=4141/4431, merge=0/0, ticks=50555/50243, in_queue=100798, util=99.57% 
00:14:55.855 nvme0n4: ios=3919/4096, merge=0/0, ticks=32364/31670, in_queue=64034, util=99.89% 00:14:55.855 20:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:55.855 20:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1649927 00:14:55.855 20:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:55.855 20:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:55.855 [global] 00:14:55.855 thread=1 00:14:55.855 invalidate=1 00:14:55.855 rw=read 00:14:55.855 time_based=1 00:14:55.855 runtime=10 00:14:55.855 ioengine=libaio 00:14:55.855 direct=1 00:14:55.855 bs=4096 00:14:55.855 iodepth=1 00:14:55.855 norandommap=1 00:14:55.855 numjobs=1 00:14:55.855 00:14:55.855 [job0] 00:14:55.855 filename=/dev/nvme0n1 00:14:55.855 [job1] 00:14:55.855 filename=/dev/nvme0n2 00:14:55.855 [job2] 00:14:55.855 filename=/dev/nvme0n3 00:14:55.855 [job3] 00:14:55.855 filename=/dev/nvme0n4 00:14:55.855 Could not set queue depth (nvme0n1) 00:14:55.855 Could not set queue depth (nvme0n2) 00:14:55.855 Could not set queue depth (nvme0n3) 00:14:55.855 Could not set queue depth (nvme0n4) 00:14:55.855 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:55.855 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:55.855 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:55.855 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:55.855 fio-3.35 00:14:55.855 Starting 4 threads 00:14:59.140 20:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:59.140 20:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:59.140 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26959872, buflen=4096 00:14:59.140 fio: pid=1650023, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:59.398 20:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:59.398 20:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:59.398 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=413696, buflen=4096 00:14:59.398 fio: pid=1650022, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:59.658 20:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:59.658 20:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:59.658 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1351680, buflen=4096 00:14:59.658 fio: pid=1650020, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:00.228 fio: io_u error on file /dev/nvme0n2: 
Operation not supported: read offset=14880768, buflen=4096 00:15:00.228 fio: pid=1650021, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:00.228 20:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:00.228 20:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:00.228 00:15:00.228 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650020: Tue Oct 8 20:42:28 2024 00:15:00.228 read: IOPS=89, BW=356KiB/s (365kB/s)(1320KiB/3706msec) 00:15:00.228 slat (usec): min=6, max=6144, avg=45.22, stdev=428.80 00:15:00.228 clat (usec): min=173, max=41998, avg=11109.71, stdev=18051.74 00:15:00.228 lat (usec): min=192, max=47294, avg=11155.01, stdev=18114.19 00:15:00.228 clat percentiles (usec): 00:15:00.228 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 212], 00:15:00.228 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 253], 00:15:00.228 | 70.00th=[ 306], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:00.228 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:00.228 | 99.99th=[42206] 00:15:00.228 bw ( KiB/s): min= 96, max= 1744, per=3.59%, avg=371.29, stdev=611.56, samples=7 00:15:00.228 iops : min= 24, max= 436, avg=92.71, stdev=152.94, samples=7 00:15:00.228 lat (usec) : 250=57.10%, 500=14.50%, 750=1.51% 00:15:00.228 lat (msec) : 50=26.59% 00:15:00.228 cpu : usr=0.00%, sys=0.19%, ctx=336, majf=0, minf=1 00:15:00.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 issued rwts: total=331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.228 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650021: Tue Oct 8 20:42:28 2024 00:15:00.228 read: IOPS=882, BW=3531KiB/s (3615kB/s)(14.2MiB/4116msec) 00:15:00.228 slat (usec): min=5, max=14756, avg=17.47, stdev=327.33 00:15:00.228 clat (usec): min=165, max=42173, avg=1106.39, stdev=5952.15 00:15:00.228 lat (usec): min=172, max=42180, avg=1123.87, stdev=5961.07 00:15:00.228 clat percentiles (usec): 00:15:00.228 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:15:00.228 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:15:00.228 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 314], 00:15:00.228 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:00.228 | 99.99th=[42206] 00:15:00.228 bw ( KiB/s): min= 97, max= 9464, per=35.09%, avg=3630.13, stdev=3577.12, samples=8 00:15:00.228 iops : min= 24, max= 2366, avg=907.50, stdev=894.31, samples=8 00:15:00.228 lat (usec) : 250=81.21%, 500=16.43%, 750=0.11%, 1000=0.06% 00:15:00.228 lat (msec) : 50=2.17% 00:15:00.228 cpu : usr=0.27%, sys=1.29%, ctx=3639, majf=0, minf=1 00:15:00.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 issued rwts: total=3634,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:00.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.228 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650022: Tue Oct 8 20:42:28 2024 00:15:00.228 read: IOPS=30, BW=121KiB/s (124kB/s)(404KiB/3336msec) 00:15:00.228 slat (nsec): min=6924, max=49139, avg=17818.05, stdev=7413.31 00:15:00.228 clat (usec): min=243, max=42052, avg=32752.61, stdev=16267.53 00:15:00.228 lat (usec): min=257, max=42068, avg=32770.45, stdev=16268.88 00:15:00.228 clat percentiles (usec): 00:15:00.228 | 1.00th=[ 351], 5.00th=[ 433], 10.00th=[ 498], 20.00th=[19268], 00:15:00.228 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:00.228 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:00.228 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:00.228 | 99.99th=[42206] 00:15:00.228 bw ( KiB/s): min= 96, max= 160, per=1.20%, avg=124.00, stdev=21.32, samples=6 00:15:00.228 iops : min= 24, max= 40, avg=31.00, stdev= 5.33, samples=6 00:15:00.228 lat (usec) : 250=0.98%, 500=11.76%, 750=6.86% 00:15:00.228 lat (msec) : 20=0.98%, 50=78.43% 00:15:00.228 cpu : usr=0.00%, sys=0.09%, ctx=104, majf=0, minf=2 00:15:00.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 issued rwts: total=102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.228 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650023: Tue Oct 8 20:42:28 2024 00:15:00.228 read: IOPS=2239, BW=8955KiB/s (9170kB/s)(25.7MiB/2940msec) 00:15:00.228 slat (nsec): min=4970, max=50219, avg=8389.30, stdev=4425.48 00:15:00.228 clat (usec): min=172, max=41195, avg=432.17, stdev=2878.77 00:15:00.228 lat (usec): min=178, max=41202, avg=440.56, stdev=2879.30 00:15:00.228 clat percentiles (usec): 00:15:00.228 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:15:00.228 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:15:00.228 | 70.00th=[ 233], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 306], 00:15:00.228 | 99.00th=[ 379], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:15:00.228 | 99.99th=[41157] 00:15:00.228 bw ( KiB/s): min= 96, max=13576, per=91.32%, avg=9448.00, stdev=5445.02, samples=5 00:15:00.228 iops : min= 24, max= 3394, avg=2362.00, stdev=1361.26, samples=5 00:15:00.228 lat (usec) : 250=81.25%, 500=18.20%, 750=0.02% 00:15:00.228 lat (msec) : 4=0.02%, 50=0.50% 00:15:00.228 cpu : usr=0.85%, sys=2.01%, ctx=6583, majf=0, minf=2 00:15:00.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.228 issued rwts: total=6583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.228 00:15:00.228 Run status group 0 (all jobs): 00:15:00.228 READ: bw=10.1MiB/s (10.6MB/s), 121KiB/s-8955KiB/s (124kB/s-9170kB/s), io=41.6MiB (43.6MB), run=2940-4116msec 00:15:00.228 00:15:00.228 Disk stats (read/write): 00:15:00.228 nvme0n1: ios=326/0, merge=0/0, ticks=3503/0, in_queue=3503, 
util=94.23% 00:15:00.228 nvme0n2: ios=3668/0, merge=0/0, ticks=4683/0, in_queue=4683, util=98.96% 00:15:00.228 nvme0n3: ios=144/0, merge=0/0, ticks=4024/0, in_queue=4024, util=99.56% 00:15:00.228 nvme0n4: ios=6577/0, merge=0/0, ticks=2699/0, in_queue=2699, util=96.64% 00:15:00.799 20:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:00.799 20:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:01.057 20:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:01.057 20:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:01.317 20:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:01.317 20:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:02.257 20:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:02.257 20:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:02.257 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:02.257 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1649927 00:15:02.257 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:02.257 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:02.517 nvmf hotplug test: fio failed as expected 00:15:02.517 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.086 
20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:03.086 rmmod nvme_tcp 00:15:03.086 rmmod nvme_fabrics 00:15:03.086 rmmod nvme_keyring 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1646967 ']' 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1646967 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1646967 ']' 00:15:03.086 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1646967 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1646967 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1646967' 00:15:03.345 killing process with pid 1646967 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1646967 00:15:03.345 20:42:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1646967 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:03.605 20:42:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.605 20:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:06.149 00:15:06.149 real 0m30.806s 00:15:06.149 user 1m51.292s 00:15:06.149 sys 0m8.022s 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.149 ************************************ 00:15:06.149 END TEST nvmf_fio_target 00:15:06.149 ************************************ 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:06.149 ************************************ 00:15:06.149 START TEST nvmf_bdevio 00:15:06.149 ************************************ 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:06.149 * Looking for test storage... 
00:15:06.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.149 --rc genhtml_branch_coverage=1 00:15:06.149 --rc genhtml_function_coverage=1 00:15:06.149 --rc genhtml_legend=1 00:15:06.149 --rc geninfo_all_blocks=1 00:15:06.149 --rc geninfo_unexecuted_blocks=1 00:15:06.149 00:15:06.149 ' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.149 --rc genhtml_branch_coverage=1 00:15:06.149 --rc genhtml_function_coverage=1 00:15:06.149 --rc genhtml_legend=1 00:15:06.149 --rc geninfo_all_blocks=1 00:15:06.149 --rc geninfo_unexecuted_blocks=1 00:15:06.149 00:15:06.149 ' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.149 --rc genhtml_branch_coverage=1 00:15:06.149 --rc genhtml_function_coverage=1 00:15:06.149 --rc genhtml_legend=1 00:15:06.149 --rc geninfo_all_blocks=1 00:15:06.149 --rc geninfo_unexecuted_blocks=1 00:15:06.149 00:15:06.149 ' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.149 --rc genhtml_branch_coverage=1 00:15:06.149 --rc genhtml_function_coverage=1 00:15:06.149 --rc genhtml_legend=1 00:15:06.149 --rc geninfo_all_blocks=1 00:15:06.149 --rc geninfo_unexecuted_blocks=1 00:15:06.149 00:15:06.149 ' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.149 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:15:06.150 20:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:09.438 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:09.439 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:09.439 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:09.439 20:42:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:09.439 Found net devices under 0000:84:00.0: cvl_0_0 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:09.439 Found net devices under 0000:84:00.1: cvl_0_1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.439 
20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:09.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:15:09.439 00:15:09.439 --- 10.0.0.2 ping statistics --- 00:15:09.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.439 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:09.439 00:15:09.439 --- 10.0.0.1 ping statistics --- 00:15:09.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.439 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1652936 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1652936 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1652936 ']' 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.439 20:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.439 [2024-10-08 20:42:37.912425] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:15:09.439 [2024-10-08 20:42:37.912529] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.439 [2024-10-08 20:42:37.990887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.439 [2024-10-08 20:42:38.118041] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.439 [2024-10-08 20:42:38.118105] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.439 [2024-10-08 20:42:38.118120] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.439 [2024-10-08 20:42:38.118132] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.439 [2024-10-08 20:42:38.118142] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.439 [2024-10-08 20:42:38.120025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:15:09.439 [2024-10-08 20:42:38.120086] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:15:09.439 [2024-10-08 20:42:38.120120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:15:09.439 [2024-10-08 20:42:38.120123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 [2024-10-08 20:42:38.345380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 Malloc0 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.699 20:42:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:09.699 [2024-10-08 20:42:38.399892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:15:09.699 { 00:15:09.699 "params": { 00:15:09.699 "name": "Nvme$subsystem", 00:15:09.699 "trtype": "$TEST_TRANSPORT", 00:15:09.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.699 "adrfam": "ipv4", 00:15:09.699 "trsvcid": "$NVMF_PORT", 00:15:09.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.699 "hdgst": ${hdgst:-false}, 00:15:09.699 "ddgst": ${ddgst:-false} 00:15:09.699 }, 00:15:09.699 "method": "bdev_nvme_attach_controller" 00:15:09.699 } 00:15:09.699 EOF 00:15:09.699 )") 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:15:09.699 20:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:15:09.699 "params": { 00:15:09.699 "name": "Nvme1", 00:15:09.699 "trtype": "tcp", 00:15:09.699 "traddr": "10.0.0.2", 00:15:09.699 "adrfam": "ipv4", 00:15:09.699 "trsvcid": "4420", 00:15:09.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.699 "hdgst": false, 00:15:09.699 "ddgst": false 00:15:09.699 }, 00:15:09.699 "method": "bdev_nvme_attach_controller" 00:15:09.699 }' 00:15:09.699 [2024-10-08 20:42:38.460045] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:15:09.700 [2024-10-08 20:42:38.460204] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653090 ] 00:15:09.959 [2024-10-08 20:42:38.557537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.959 [2024-10-08 20:42:38.675132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.959 [2024-10-08 20:42:38.675183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.959 [2024-10-08 20:42:38.675187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.527 I/O targets: 00:15:10.527 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:10.527 00:15:10.527 00:15:10.527 CUnit - A unit testing framework for C - Version 2.1-3 00:15:10.527 http://cunit.sourceforge.net/ 00:15:10.527 00:15:10.527 00:15:10.527 Suite: bdevio tests on: Nvme1n1 00:15:10.527 Test: blockdev write read block ...passed 00:15:10.527 Test: blockdev write zeroes read block ...passed 00:15:10.527 Test: blockdev write zeroes read no split ...passed 00:15:10.528 Test: blockdev write zeroes read split ...passed 00:15:10.528 Test: blockdev write zeroes read split partial ...passed 00:15:10.528 Test: blockdev reset ...[2024-10-08 20:42:39.218630] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:10.528 [2024-10-08 20:42:39.218754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x881f40 (9): Bad file descriptor 00:15:10.528 [2024-10-08 20:42:39.232493] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:10.528 passed 00:15:10.528 Test: blockdev write read 8 blocks ...passed 00:15:10.528 Test: blockdev write read size > 128k ...passed 00:15:10.528 Test: blockdev write read invalid size ...passed 00:15:10.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.528 Test: blockdev write read max offset ...passed 00:15:10.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.786 Test: blockdev writev readv 8 blocks ...passed 00:15:10.786 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.786 Test: blockdev writev readv block ...passed 00:15:10.786 Test: blockdev writev readv size > 128k ...passed 00:15:10.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.786 Test: blockdev comparev and writev ...[2024-10-08 20:42:39.406481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.406518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.406542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.406560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.406944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.406969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.406992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.407008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.407392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.407416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.407438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.407454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.407831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.407877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:10.786 [2024-10-08 20:42:39.407894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:10.786 passed 00:15:10.786 Test: blockdev nvme passthru rw ...passed 00:15:10.786 Test: blockdev nvme passthru vendor specific ...[2024-10-08 20:42:39.489959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:10.786 [2024-10-08 20:42:39.489997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:10.786 [2024-10-08 20:42:39.490141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:10.787 [2024-10-08 20:42:39.490163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:10.787 [2024-10-08 20:42:39.490306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:10.787 [2024-10-08 20:42:39.490329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:10.787 [2024-10-08 20:42:39.490468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:10.787 [2024-10-08 20:42:39.490490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:10.787 passed 00:15:10.787 Test: blockdev nvme admin passthru ...passed 00:15:11.046 Test: blockdev copy ...passed 00:15:11.046 00:15:11.046 Run Summary: Type Total Ran Passed Failed Inactive 00:15:11.046 suites 1 1 n/a 0 0 00:15:11.046 tests 23 23 23 0 0 00:15:11.046 asserts 152 152 152 0 n/a 00:15:11.046 00:15:11.046 Elapsed time = 1.050 seconds 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.046 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.305 rmmod nvme_tcp 00:15:11.305 rmmod nvme_fabrics 00:15:11.305 rmmod nvme_keyring 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1652936 ']' 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1652936 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1652936 ']' 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1652936 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1652936 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1652936' 00:15:11.305 killing process with pid 1652936 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1652936 00:15:11.305 20:42:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1652936 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.563 20:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:14.097 00:15:14.097 real 0m7.866s 00:15:14.097 user 0m12.099s 00:15:14.097 sys 0m3.066s 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:14.097 ************************************ 00:15:14.097 END TEST nvmf_bdevio 00:15:14.097 ************************************ 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:14.097 00:15:14.097 real 4m50.081s 00:15:14.097 user 12m13.151s 00:15:14.097 sys 1m25.087s 
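[editor's note] The process and network cleanup traced just above can likewise be summarized. Everything except the ip netns delete line is copied from the trace; that one line is an assumption about what _remove_spdk_ns does, since its body is elided from this excerpt:

    # killprocess: confirm the target app (pid 1652936 in this run) is alive and is ours, then stop it
    kill -0 1652936 && ps --no-headers -o comm= 1652936   # -> reactor_3
    kill 1652936 && wait 1652936
    # iptr: restore iptables without the SPDK_NVMF-tagged rules added for the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # tear down the test namespace and flush the initiator-side address
    ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns; not shown verbatim in the log
    ip -4 addr flush cvl_0_1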
00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:14.097 ************************************ 00:15:14.097 END TEST nvmf_target_core 00:15:14.097 ************************************ 00:15:14.097 20:42:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:14.097 20:42:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.097 20:42:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.097 20:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.097 ************************************ 00:15:14.097 START TEST nvmf_target_extra 00:15:14.097 ************************************ 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:14.097 * Looking for test storage... 00:15:14.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.097 --rc genhtml_branch_coverage=1 00:15:14.097 --rc genhtml_function_coverage=1 00:15:14.097 --rc genhtml_legend=1 00:15:14.097 --rc geninfo_all_blocks=1 00:15:14.097 --rc geninfo_unexecuted_blocks=1 00:15:14.097 00:15:14.097 ' 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.097 --rc genhtml_branch_coverage=1 00:15:14.097 --rc genhtml_function_coverage=1 00:15:14.097 --rc genhtml_legend=1 00:15:14.097 --rc geninfo_all_blocks=1 00:15:14.097 --rc geninfo_unexecuted_blocks=1 00:15:14.097 00:15:14.097 ' 00:15:14.097 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.097 --rc genhtml_branch_coverage=1 00:15:14.097 --rc genhtml_function_coverage=1 00:15:14.097 --rc genhtml_legend=1 00:15:14.097 --rc geninfo_all_blocks=1 00:15:14.098 --rc geninfo_unexecuted_blocks=1 00:15:14.098 00:15:14.098 ' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.098 --rc genhtml_branch_coverage=1 00:15:14.098 --rc genhtml_function_coverage=1 00:15:14.098 --rc genhtml_legend=1 00:15:14.098 --rc geninfo_all_blocks=1 00:15:14.098 --rc geninfo_unexecuted_blocks=1 00:15:14.098 00:15:14.098 ' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
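[editor's note] The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a field-by-field numeric compare of dotted version strings. A minimal standalone sketch of that logic, reusing the variable names seen in the trace:

    # split both versions on '.', '-' and ':' exactly as the trace does (IFS=.-:)
    IFS='.-:' read -ra ver1 <<< "1.15"
    IFS='.-:' read -ra ver2 <<< "2"
    # compare the first differing field numerically; 1 < 2, so "1.15 < 2" holds and lt returns success
    (( ${ver1[0]:-0} < ${ver2[0]:-0} )) && echo "lcov 1.15 is older than 2"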
00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.098 ************************************ 00:15:14.098 START TEST nvmf_example 00:15:14.098 ************************************ 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:14.098 * Looking for test storage... 
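[editor's note] The "[: : integer expression expected" message emitted by nvmf/common.sh line 33 here (and again later in this log) is a benign bash warning: the variable being tested is empty in this environment, so the integer comparison fails to parse, the guarded branch is not taken, and the run continues. A two-line reproduction, with a placeholder variable standing in for whichever flag line 33 actually checks:

    flag=""                        # empty in this CI environment (placeholder name, not the real variable)
    [ "$flag" -eq 1 ] && echo on   # -> [: : integer expression expected; test is false, branch skipped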
00:15:14.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.098 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.358 --rc genhtml_branch_coverage=1 00:15:14.358 --rc genhtml_function_coverage=1 00:15:14.358 --rc genhtml_legend=1 00:15:14.358 --rc geninfo_all_blocks=1 00:15:14.358 --rc geninfo_unexecuted_blocks=1 00:15:14.358 00:15:14.358 ' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.358 --rc genhtml_branch_coverage=1 00:15:14.358 --rc genhtml_function_coverage=1 00:15:14.358 --rc genhtml_legend=1 00:15:14.358 --rc geninfo_all_blocks=1 00:15:14.358 --rc geninfo_unexecuted_blocks=1 00:15:14.358 00:15:14.358 ' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.358 --rc genhtml_branch_coverage=1 00:15:14.358 --rc genhtml_function_coverage=1 00:15:14.358 --rc genhtml_legend=1 00:15:14.358 --rc geninfo_all_blocks=1 00:15:14.358 --rc geninfo_unexecuted_blocks=1 00:15:14.358 00:15:14.358 ' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.358 --rc genhtml_branch_coverage=1 00:15:14.358 --rc genhtml_function_coverage=1 00:15:14.358 --rc genhtml_legend=1 00:15:14.358 --rc geninfo_all_blocks=1 00:15:14.358 --rc geninfo_unexecuted_blocks=1 00:15:14.358 00:15:14.358 ' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:14.358 20:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.358 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:14.359 20:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.359 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:16.891 20:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.891 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:16.892 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:16.892 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:16.892 Found net devices under 0000:84:00.0: cvl_0_0 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:16.892 Found net devices under 0000:84:00.1: cvl_0_1 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.892 20:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.892 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:17.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:15:17.152 00:15:17.152 --- 10.0.0.2 ping statistics --- 00:15:17.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.152 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:15:17.152 00:15:17.152 --- 10.0.0.1 ping statistics --- 00:15:17.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.152 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1655371 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1655371 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1655371 ']' 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.152 20:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.152 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:17.718 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:29.925 Initializing NVMe Controllers 00:15:29.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.925 Initialization complete. Launching workers. 00:15:29.925 ======================================================== 00:15:29.925 Latency(us) 00:15:29.925 Device Information : IOPS MiB/s Average min max 00:15:29.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14840.22 57.97 4313.75 868.50 15278.07 00:15:29.925 ======================================================== 00:15:29.925 Total : 14840.22 57.97 4313.75 868.50 15278.07 00:15:29.925 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.925 rmmod nvme_tcp 00:15:29.925 rmmod nvme_fabrics 00:15:29.925 rmmod nvme_keyring 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1655371 ']' 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1655371 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1655371 ']' 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1655371 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1655371 00:15:29.925 20:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1655371' 00:15:29.925 killing process with pid 1655371 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1655371 00:15:29.925 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1655371 00:15:29.925 nvmf threads initialize successfully 00:15:29.925 bdev subsystem init successfully 00:15:29.925 created a nvmf target service 00:15:29.925 create targets's poll groups done 00:15:29.925 all subsystems of target started 00:15:29.925 nvmf target is running 00:15:29.925 all subsystems of target stopped 00:15:29.925 destroy targets's poll groups done 00:15:29.926 destroyed the nvmf target service 00:15:29.926 bdev subsystem finish successfully 00:15:29.926 nvmf threads destroy successfully 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.926 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 00:15:30.913 real 0m16.719s 00:15:30.913 user 0m43.751s 00:15:30.913 sys 0m4.323s 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 ************************************ 00:15:30.913 END TEST nvmf_example 00:15:30.913 ************************************ 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 ************************************ 00:15:30.913 START TEST nvmf_filesystem 00:15:30.913 ************************************ 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:30.913 * Looking for test storage... 00:15:30.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.913 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:30.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.913 --rc genhtml_branch_coverage=1 00:15:30.913 --rc genhtml_function_coverage=1 00:15:30.913 --rc genhtml_legend=1 00:15:30.913 --rc geninfo_all_blocks=1 00:15:30.914 --rc geninfo_unexecuted_blocks=1 00:15:30.914 00:15:30.914 ' 00:15:30.914 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.914 --rc genhtml_branch_coverage=1 00:15:30.914 --rc genhtml_function_coverage=1 00:15:30.914 --rc genhtml_legend=1 00:15:30.914 --rc geninfo_all_blocks=1 00:15:30.914 --rc geninfo_unexecuted_blocks=1 00:15:30.914 00:15:30.914 ' 00:15:30.914 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.914 --rc genhtml_branch_coverage=1 00:15:30.914 --rc genhtml_function_coverage=1 00:15:30.914 --rc genhtml_legend=1 00:15:30.914 --rc geninfo_all_blocks=1 00:15:30.914 --rc geninfo_unexecuted_blocks=1 00:15:30.914 00:15:30.914 ' 00:15:30.914 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.914 --rc genhtml_branch_coverage=1 00:15:30.914 --rc genhtml_function_coverage=1 00:15:30.914 --rc genhtml_legend=1 00:15:30.914 --rc geninfo_all_blocks=1 00:15:30.914 --rc geninfo_unexecuted_blocks=1 00:15:30.914 00:15:30.914 ' 00:15:30.914 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:30.914 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:30.914 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:31.177 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:31.177 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:31.178 #define SPDK_CONFIG_H 00:15:31.178 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:31.178 #define SPDK_CONFIG_APPS 1 00:15:31.178 #define SPDK_CONFIG_ARCH native 00:15:31.178 #undef SPDK_CONFIG_ASAN 00:15:31.178 #undef SPDK_CONFIG_AVAHI 00:15:31.178 #undef SPDK_CONFIG_CET 00:15:31.178 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:31.178 #define SPDK_CONFIG_COVERAGE 1 00:15:31.178 #define SPDK_CONFIG_CROSS_PREFIX 00:15:31.178 #undef SPDK_CONFIG_CRYPTO 00:15:31.178 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:31.178 #undef SPDK_CONFIG_CUSTOMOCF 00:15:31.178 #undef SPDK_CONFIG_DAOS 00:15:31.178 #define SPDK_CONFIG_DAOS_DIR 00:15:31.178 #define SPDK_CONFIG_DEBUG 1 00:15:31.178 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:31.178 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:31.178 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:31.178 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:31.178 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:31.178 #undef SPDK_CONFIG_DPDK_UADK 00:15:31.178 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:31.178 #define SPDK_CONFIG_EXAMPLES 1 00:15:31.178 #undef SPDK_CONFIG_FC 00:15:31.178 #define SPDK_CONFIG_FC_PATH 00:15:31.178 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:31.178 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:31.178 #define SPDK_CONFIG_FSDEV 1 00:15:31.178 #undef SPDK_CONFIG_FUSE 00:15:31.178 #undef SPDK_CONFIG_FUZZER 00:15:31.178 #define SPDK_CONFIG_FUZZER_LIB 00:15:31.178 #undef SPDK_CONFIG_GOLANG 00:15:31.178 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:31.178 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:31.178 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:31.178 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:31.178 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:31.178 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:31.178 #undef SPDK_CONFIG_HAVE_LZ4 00:15:31.178 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:31.178 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:31.178 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:31.178 #define SPDK_CONFIG_IDXD 1 00:15:31.178 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:31.178 #undef SPDK_CONFIG_IPSEC_MB 00:15:31.178 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:31.178 #define SPDK_CONFIG_ISAL 1 00:15:31.178 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:31.178 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:31.178 #define SPDK_CONFIG_LIBDIR 00:15:31.178 #undef SPDK_CONFIG_LTO 00:15:31.178 #define SPDK_CONFIG_MAX_LCORES 128 00:15:31.178 #define SPDK_CONFIG_NVME_CUSE 1 00:15:31.178 #undef SPDK_CONFIG_OCF 00:15:31.178 #define SPDK_CONFIG_OCF_PATH 00:15:31.178 #define SPDK_CONFIG_OPENSSL_PATH 00:15:31.178 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:31.178 #define SPDK_CONFIG_PGO_DIR 00:15:31.178 #undef SPDK_CONFIG_PGO_USE 00:15:31.178 #define SPDK_CONFIG_PREFIX /usr/local 00:15:31.178 #undef SPDK_CONFIG_RAID5F 00:15:31.178 #undef SPDK_CONFIG_RBD 00:15:31.178 #define SPDK_CONFIG_RDMA 1 00:15:31.178 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:31.178 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:31.178 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:31.178 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:31.178 #define SPDK_CONFIG_SHARED 1 00:15:31.178 #undef SPDK_CONFIG_SMA 00:15:31.178 #define SPDK_CONFIG_TESTS 1 00:15:31.178 #undef SPDK_CONFIG_TSAN 00:15:31.178 #define SPDK_CONFIG_UBLK 1 00:15:31.178 #define SPDK_CONFIG_UBSAN 1 00:15:31.178 #undef SPDK_CONFIG_UNIT_TESTS 00:15:31.178 #undef SPDK_CONFIG_URING 00:15:31.178 #define 
SPDK_CONFIG_URING_PATH 00:15:31.178 #undef SPDK_CONFIG_URING_ZNS 00:15:31.178 #undef SPDK_CONFIG_USDT 00:15:31.178 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:31.178 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:31.178 #define SPDK_CONFIG_VFIO_USER 1 00:15:31.178 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:31.178 #define SPDK_CONFIG_VHOST 1 00:15:31.178 #define SPDK_CONFIG_VIRTIO 1 00:15:31.178 #undef SPDK_CONFIG_VTUNE 00:15:31.178 #define SPDK_CONFIG_VTUNE_DIR 00:15:31.178 #define SPDK_CONFIG_WERROR 1 00:15:31.178 #define SPDK_CONFIG_WPDK_DIR 00:15:31.178 #undef SPDK_CONFIG_XNVME 00:15:31.178 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.178 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:31.178 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:31.179 
20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:31.179 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:31.180 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1657053 ]] 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1657053 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.eXDqLE 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:31.180 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eXDqLE/tests/target /tmp/spdk.eXDqLE 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=660762624 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:15:31.181 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623667200 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39080583168 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=45077078016 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5996494848 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22528507904 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=8992956416 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9015418880 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22462464 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22538051584 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=487424 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:31.181 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4507693056 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4507705344 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:15:31.181 * Looking for test storage... 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=39080583168 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8211087360 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:15:31.181 20:42:59 
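set_test_storage above walks the df -T output and settles on the overlay root, which offers about 39 GiB against a requested 2 GiB plus margin. A rough standalone equivalent of that selection, assuming GNU df (the candidate directories are illustrative):

    requested=$((2 * 1024 * 1024 * 1024))            # ~2 GiB, as requested in the trace
    for dir in "$PWD" "${TMPDIR:-/tmp}"; do
        avail=$(df -B1 --output=avail "$dir" | tail -1)
        if [ "$avail" -ge "$requested" ]; then
            echo "using $dir for test storage"
            break
        fi
    done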
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:15:31.181 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:31.441 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:31.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.442 --rc genhtml_branch_coverage=1 00:15:31.442 --rc genhtml_function_coverage=1 00:15:31.442 --rc genhtml_legend=1 00:15:31.442 --rc geninfo_all_blocks=1 00:15:31.442 --rc geninfo_unexecuted_blocks=1 00:15:31.442 00:15:31.442 ' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:31.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.442 --rc genhtml_branch_coverage=1 00:15:31.442 --rc genhtml_function_coverage=1 00:15:31.442 --rc genhtml_legend=1 00:15:31.442 --rc geninfo_all_blocks=1 00:15:31.442 --rc geninfo_unexecuted_blocks=1 00:15:31.442 00:15:31.442 ' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:31.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.442 --rc genhtml_branch_coverage=1 00:15:31.442 --rc genhtml_function_coverage=1 00:15:31.442 --rc genhtml_legend=1 00:15:31.442 --rc geninfo_all_blocks=1 00:15:31.442 --rc geninfo_unexecuted_blocks=1 00:15:31.442 00:15:31.442 ' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:31.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.442 --rc genhtml_branch_coverage=1 00:15:31.442 --rc genhtml_function_coverage=1 00:15:31.442 --rc genhtml_legend=1 00:15:31.442 --rc geninfo_all_blocks=1 00:15:31.442 --rc geninfo_unexecuted_blocks=1 00:15:31.442 00:15:31.442 ' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
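The block above is scripts/common.sh comparing the reported lcov version (1.15) against 2 field by field before deciding which coverage flags to pass. A compact way to make the same ordering check, assuming GNU sort with -V support rather than the script's own loop:

    version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    version_lt 1.15 2 && echo 'lcov older than 2: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'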
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
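nvmf/common.sh above derives the initiator identity once per run: nvme gen-hostnqn produces the host NQN and its trailing UUID doubles as the host ID, both reused later by nvme connect. By hand, assuming nvme-cli is installed (the actual values differ per machine):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the UUID portion
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"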
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:31.442 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.443 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.731 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.731 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:34.731 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:34.731 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:34.732 
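The "integer expression expected" message above is nvmf/common.sh testing an empty string with -eq; the failed test just takes the false branch and the run carries on. A defensive pattern for that kind of check, with a hypothetical variable name standing in for the empty one:

    some_flag=""                             # hypothetical stand-in for the unset value
    if [ "${some_flag:-0}" -eq 1 ]; then     # default to 0 so the numeric test is always valid
        echo "flag enabled"
    fi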
20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:34.732 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:34.732 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:34.732 Found net devices under 0000:84:00.0: cvl_0_0 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:34.732 Found net devices under 
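gather_supported_nvmf_pci_devs above matches the two Intel E810 ports (vendor 0x8086, device 0x159b) and resolves their net device names through sysfs, ending up with cvl_0_0 and cvl_0_1. Roughly the same discovery by hand, assuming pciutils and sysfs are available:

    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "$pci -> $(basename "$net")"
        done
    done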
0000:84:00.1: cvl_0_1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
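nvmf_tcp_init above splits the two ports across network namespaces so target and initiator talk over real NICs on 10.0.0.0/24: the target port moves into cvl_0_0_ns_spdk, the initiator port stays in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT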
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:34.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:15:34.732 00:15:34.732 --- 10.0.0.2 ping statistics --- 00:15:34.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.732 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:15:34.732 00:15:34.732 --- 10.0.0.1 ping statistics --- 00:15:34.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.732 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:34.732 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.733 ************************************ 00:15:34.733 START TEST nvmf_filesystem_no_in_capsule 00:15:34.733 ************************************ 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
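Before any NVMe traffic the script verifies connectivity in both directions and loads the kernel initiator module; both pings above answer in well under a millisecond. The equivalent commands:

    ping -c 1 10.0.0.2                                        # root namespace to target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and back
    modprobe nvme-tcp                                         # kernel NVMe/TCP initiator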
00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1658845 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1658845 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1658845 ']' 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.733 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:34.733 [2024-10-08 20:43:03.428019] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:15:34.733 [2024-10-08 20:43:03.428112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.995 [2024-10-08 20:43:03.544335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.255 [2024-10-08 20:43:03.769276] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.255 [2024-10-08 20:43:03.769410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.255 [2024-10-08 20:43:03.769464] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.255 [2024-10-08 20:43:03.769510] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.255 [2024-10-08 20:43:03.769550] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.255 [2024-10-08 20:43:03.773379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.255 [2024-10-08 20:43:03.773479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.255 [2024-10-08 20:43:03.773565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.255 [2024-10-08 20:43:03.773569] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.255 [2024-10-08 20:43:03.947224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.255 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.515 Malloc1 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.515 20:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.515 [2024-10-08 20:43:04.134954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.515 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:35.515 { 00:15:35.515 "name": "Malloc1", 00:15:35.515 "aliases": [ 00:15:35.515 "810a455e-5cd5-4e7b-9190-a4c17439d502" 00:15:35.515 ], 00:15:35.515 "product_name": "Malloc disk", 00:15:35.515 "block_size": 512, 00:15:35.515 "num_blocks": 1048576, 00:15:35.515 "uuid": "810a455e-5cd5-4e7b-9190-a4c17439d502", 00:15:35.515 "assigned_rate_limits": { 00:15:35.515 "rw_ios_per_sec": 0, 00:15:35.515 "rw_mbytes_per_sec": 0, 00:15:35.515 "r_mbytes_per_sec": 0, 00:15:35.515 "w_mbytes_per_sec": 0 00:15:35.515 }, 00:15:35.515 "claimed": true, 00:15:35.515 "claim_type": "exclusive_write", 00:15:35.515 "zoned": false, 00:15:35.515 "supported_io_types": { 00:15:35.515 "read": 
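The rpc_cmd calls above build the whole target configuration for this variant: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev, a subsystem, its namespace, and a listener on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py against a target launched the way this run does (the RPC socket defaults to /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # wait until /var/tmp/spdk.sock accepts RPCs, then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420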
true, 00:15:35.515 "write": true, 00:15:35.515 "unmap": true, 00:15:35.515 "flush": true, 00:15:35.515 "reset": true, 00:15:35.515 "nvme_admin": false, 00:15:35.516 "nvme_io": false, 00:15:35.516 "nvme_io_md": false, 00:15:35.516 "write_zeroes": true, 00:15:35.516 "zcopy": true, 00:15:35.516 "get_zone_info": false, 00:15:35.516 "zone_management": false, 00:15:35.516 "zone_append": false, 00:15:35.516 "compare": false, 00:15:35.516 "compare_and_write": false, 00:15:35.516 "abort": true, 00:15:35.516 "seek_hole": false, 00:15:35.516 "seek_data": false, 00:15:35.516 "copy": true, 00:15:35.516 "nvme_iov_md": false 00:15:35.516 }, 00:15:35.516 "memory_domains": [ 00:15:35.516 { 00:15:35.516 "dma_device_id": "system", 00:15:35.516 "dma_device_type": 1 00:15:35.516 }, 00:15:35.516 { 00:15:35.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.516 "dma_device_type": 2 00:15:35.516 } 00:15:35.516 ], 00:15:35.516 "driver_specific": {} 00:15:35.516 } 00:15:35.516 ]' 00:15:35.516 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:35.516 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:35.516 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:35.774 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:35.774 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:35.774 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:35.774 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:35.774 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:36.343 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.343 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:36.343 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.343 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:36.343 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:38.252 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:38.252 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:38.252 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
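With the listener up, the initiator in the root namespace connects and polls lsblk until a device with the expected serial appears, then checks its size against the 536870912-byte malloc bdev. By hand, assuming nvme-cli and util-linux:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME       # resolves to nvme0n1 in this run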
SPDKISFASTANDAWESOME 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:38.511 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:38.771 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:39.708 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:39.709 ************************************ 00:15:39.709 START TEST filesystem_ext4 00:15:39.709 ************************************ 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
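The new namespace is then labelled and given one partition covering the whole device before the per-filesystem sub-tests start. From the trace:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe                                                 # re-read the partition table
    sleep 1                                                   # let /dev/nvme0n1p1 appear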
00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:15:39.709 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:39.709 mke2fs 1.47.0 (5-Feb-2023) 00:15:39.968 Discarding device blocks: 0/522240 done 00:15:39.968 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:39.968 Filesystem UUID: ae0a0fc6-667a-4484-a99f-1427ddaf2cda 00:15:39.968 Superblock backups stored on blocks: 00:15:39.968 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:39.968 00:15:39.968 Allocating group tables: 0/64 done 00:15:39.968 Writing inode tables: 0/64 done 00:15:43.257 Creating journal (8192 blocks): done 00:15:45.022 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:15:45.022 00:15:45.022 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:45.022 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:51.589 
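The ext4 pass above is the first instance of a create/verify cycle that every filesystem type in this run repeats. A minimal sketch of that cycle follows, with the device path and mount point taken from the trace; the wrapper function name is illustrative only, not the script's actual helper:

    # Illustrative condensation of the mkfs/mount/touch/sync/rm/umount cycle traced above.
    filesystem_cycle_sketch() {
        local fstype=$1             # ext4, btrfs or xfs
        local part=/dev/nvme0n1p1   # GPT partition created earlier on the exported namespace

        local force=-f
        [ "$fstype" = ext4 ] && force=-F       # mkfs.ext4 takes -F, btrfs/xfs take -f
        mkfs."$fstype" $force "$part"          # build the filesystem over NVMe/TCP
        mount "$part" /mnt/device              # mount it on the initiator side
        touch /mnt/device/aaa && sync          # write a file and force it out to the target
        rm /mnt/device/aaa && sync             # remove it and flush again
        umount /mnt/device                     # unmount before the next filesystem type
    }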
20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1658845 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:51.589 00:15:51.589 real 0m11.086s 00:15:51.589 user 0m0.030s 00:15:51.589 sys 0m0.069s 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:51.589 ************************************ 00:15:51.589 END TEST filesystem_ext4 00:15:51.589 ************************************ 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.589 ************************************ 00:15:51.589 START TEST filesystem_btrfs 00:15:51.589 ************************************ 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:51.589 20:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:51.589 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:51.589 btrfs-progs v6.8.1 00:15:51.589 See https://btrfs.readthedocs.io for more information. 00:15:51.589 00:15:51.589 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:51.589 NOTE: several default settings have changed in version 5.15, please make sure 00:15:51.589 this does not affect your deployments: 00:15:51.589 - DUP for metadata (-m dup) 00:15:51.589 - enabled no-holes (-O no-holes) 00:15:51.589 - enabled free-space-tree (-R free-space-tree) 00:15:51.589 00:15:51.589 Label: (null) 00:15:51.589 UUID: 48f5038e-50a0-44a2-b5d1-696230c933fb 00:15:51.589 Node size: 16384 00:15:51.589 Sector size: 4096 (CPU page size: 4096) 00:15:51.589 Filesystem size: 510.00MiB 00:15:51.589 Block group profiles: 00:15:51.589 Data: single 8.00MiB 00:15:51.589 Metadata: DUP 32.00MiB 00:15:51.590 System: DUP 8.00MiB 00:15:51.590 SSD detected: yes 00:15:51.590 Zoned device: no 00:15:51.590 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:51.590 Checksum: crc32c 00:15:51.590 Number of devices: 1 00:15:51.590 Devices: 00:15:51.590 ID SIZE PATH 00:15:51.590 1 510.00MiB /dev/nvme0n1p1 00:15:51.590 00:15:51.590 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:15:51.590 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1658845 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:51.590 
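After each unmount the script checks that the SPDK target survived the I/O and that the namespace is still visible to the initiator. The commands below are the ones traced above; the pid is the nvmf_tgt pid logged for this run:

    # Post-test health check, mirrored from the trace above.
    nvmfpid=1658845                              # nvmf_tgt pid recorded earlier in this run
    kill -0 "$nvmfpid"                           # signal 0: fails if the target process died
    lsblk -l -o NAME | grep -q -w nvme0n1        # namespace block device still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # and so is the test partition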
20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:51.590 00:15:51.590 real 0m0.542s 00:15:51.590 user 0m0.024s 00:15:51.590 sys 0m0.101s 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:51.590 ************************************ 00:15:51.590 END TEST filesystem_btrfs 00:15:51.590 ************************************ 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.590 ************************************ 00:15:51.590 START TEST filesystem_xfs 00:15:51.590 ************************************ 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:15:51.590 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:51.590 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:51.590 = sectsz=512 attr=2, projid32bit=1 00:15:51.590 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:51.590 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:51.590 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:51.590 = sunit=0 swidth=0 blks 00:15:51.590 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:51.590 log =internal log bsize=4096 blocks=16384, version=2 00:15:51.590 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:51.590 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:52.963 Discarding blocks...Done. 00:15:52.963 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:52.963 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1658845 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:54.860 00:15:54.860 real 0m3.091s 00:15:54.860 user 0m0.017s 00:15:54.860 sys 0m0.078s 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:54.860 ************************************ 00:15:54.860 END TEST filesystem_xfs 00:15:54.860 ************************************ 00:15:54.860 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.119 20:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1658845 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1658845 ']' 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1658845 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658845 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658845' 00:15:55.119 killing process with pid 1658845 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1658845 00:15:55.119 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1658845 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:55.684 00:15:55.684 real 0m21.017s 00:15:55.684 user 1m20.615s 00:15:55.684 sys 0m2.627s 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.684 ************************************ 00:15:55.684 END TEST nvmf_filesystem_no_in_capsule 00:15:55.684 ************************************ 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:55.684 ************************************ 00:15:55.684 START TEST nvmf_filesystem_in_capsule 00:15:55.684 ************************************ 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.684 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1661468 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1661468 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1661468 ']' 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
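The in-capsule half of the test that starts here differs from the previous run only in how the TCP transport is created: nvmf_filesystem_part is invoked with 4096, so the transport is set up with 4 KiB of in-capsule data instead of 0. A sketch of the target-side bring-up that the following trace performs; the script drives these through its rpc_cmd wrapper, and they are written here as direct scripts/rpc.py calls only for readability, with arguments copied from the trace:

    # Target-side bring-up for the in-capsule run (arguments copied from the trace).
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # -c 4096: allow 4 KiB in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420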
00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.943 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.943 [2024-10-08 20:43:24.511101] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:15:55.943 [2024-10-08 20:43:24.511218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.943 [2024-10-08 20:43:24.633619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.201 [2024-10-08 20:43:24.864900] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.201 [2024-10-08 20:43:24.865027] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.201 [2024-10-08 20:43:24.865088] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.201 [2024-10-08 20:43:24.865137] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.201 [2024-10-08 20:43:24.865179] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.201 [2024-10-08 20:43:24.869007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.201 [2024-10-08 20:43:24.871156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.201 [2024-10-08 20:43:24.871262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.201 [2024-10-08 20:43:24.871266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.458 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.458 [2024-10-08 20:43:25.046319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.459 20:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.459 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:56.459 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.459 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 Malloc1 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 [2024-10-08 20:43:25.241467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:56.715 20:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:56.715 { 00:15:56.715 "name": "Malloc1", 00:15:56.715 "aliases": [ 00:15:56.715 "f8034e76-c9c1-4ff5-9e80-16f266118a51" 00:15:56.715 ], 00:15:56.715 "product_name": "Malloc disk", 00:15:56.715 "block_size": 512, 00:15:56.715 "num_blocks": 1048576, 00:15:56.715 "uuid": "f8034e76-c9c1-4ff5-9e80-16f266118a51", 00:15:56.715 "assigned_rate_limits": { 00:15:56.715 "rw_ios_per_sec": 0, 00:15:56.715 "rw_mbytes_per_sec": 0, 00:15:56.715 "r_mbytes_per_sec": 0, 00:15:56.715 "w_mbytes_per_sec": 0 00:15:56.715 }, 00:15:56.715 "claimed": true, 00:15:56.715 "claim_type": "exclusive_write", 00:15:56.715 "zoned": false, 00:15:56.715 "supported_io_types": { 00:15:56.715 "read": true, 00:15:56.715 "write": true, 00:15:56.715 "unmap": true, 00:15:56.715 "flush": true, 00:15:56.715 "reset": true, 00:15:56.715 "nvme_admin": false, 00:15:56.715 "nvme_io": false, 00:15:56.715 "nvme_io_md": false, 00:15:56.715 "write_zeroes": true, 00:15:56.715 "zcopy": true, 00:15:56.715 "get_zone_info": false, 00:15:56.715 "zone_management": false, 00:15:56.715 "zone_append": false, 00:15:56.715 "compare": false, 00:15:56.715 "compare_and_write": false, 00:15:56.715 "abort": true, 00:15:56.715 "seek_hole": false, 00:15:56.715 "seek_data": false, 00:15:56.715 "copy": true, 00:15:56.715 "nvme_iov_md": false 00:15:56.715 }, 00:15:56.715 "memory_domains": [ 00:15:56.715 { 00:15:56.715 "dma_device_id": "system", 00:15:56.715 "dma_device_type": 1 00:15:56.715 }, 00:15:56.715 { 00:15:56.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.715 "dma_device_type": 2 00:15:56.715 } 00:15:56.715 ], 00:15:56.715 "driver_specific": {} 00:15:56.715 } 00:15:56.715 ]' 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:56.715 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.280 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.280 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.280 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.280 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:57.280 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:59.807 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:59.807 20:43:28 
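On the host side the attach-and-partition sequence is the same as in the first half of the run. Condensed from the trace above, with the NQNs, address and serial exactly as logged; the variable name is illustrative:

    # Host-side attach and partitioning, condensed from the trace above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02

    # Map the subsystem's serial number back to the block device the kernel created.
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

    # One GPT partition spanning the whole 512 MiB namespace, then re-read the table.
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe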
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:00.374 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.309 ************************************ 00:16:01.309 START TEST filesystem_in_capsule_ext4 00:16:01.309 ************************************ 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:16:01.309 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:01.309 mke2fs 1.47.0 (5-Feb-2023) 00:16:01.309 Discarding device blocks: 0/522240 done 00:16:01.568 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:01.568 Filesystem UUID: 897caacc-7395-4ece-956a-0d8e577f7a52 00:16:01.568 Superblock backups stored on blocks: 00:16:01.568 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:01.568 00:16:01.568 Allocating group tables: 0/64 done 00:16:01.568 Writing inode tables: 
0/64 done 00:16:04.106 Creating journal (8192 blocks): done 00:16:04.365 Writing superblocks and filesystem accounting information: 0/64 done 00:16:04.365 00:16:04.365 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:16:04.365 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1661468 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:10.935 00:16:10.935 real 0m8.732s 00:16:10.935 user 0m0.021s 00:16:10.935 sys 0m0.072s 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:10.935 ************************************ 00:16:10.935 END TEST filesystem_in_capsule_ext4 00:16:10.935 ************************************ 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:10.935 
************************************ 00:16:10.935 START TEST filesystem_in_capsule_btrfs 00:16:10.935 ************************************ 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:10.935 btrfs-progs v6.8.1 00:16:10.935 See https://btrfs.readthedocs.io for more information. 00:16:10.935 00:16:10.935 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:10.935 NOTE: several default settings have changed in version 5.15, please make sure 00:16:10.935 this does not affect your deployments: 00:16:10.935 - DUP for metadata (-m dup) 00:16:10.935 - enabled no-holes (-O no-holes) 00:16:10.935 - enabled free-space-tree (-R free-space-tree) 00:16:10.935 00:16:10.935 Label: (null) 00:16:10.935 UUID: 43f009b6-6659-4d6c-9c07-4da6bc120c58 00:16:10.935 Node size: 16384 00:16:10.935 Sector size: 4096 (CPU page size: 4096) 00:16:10.935 Filesystem size: 510.00MiB 00:16:10.935 Block group profiles: 00:16:10.935 Data: single 8.00MiB 00:16:10.935 Metadata: DUP 32.00MiB 00:16:10.935 System: DUP 8.00MiB 00:16:10.935 SSD detected: yes 00:16:10.935 Zoned device: no 00:16:10.935 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:10.935 Checksum: crc32c 00:16:10.935 Number of devices: 1 00:16:10.935 Devices: 00:16:10.935 ID SIZE PATH 00:16:10.935 1 510.00MiB /dev/nvme0n1p1 00:16:10.935 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:16:10.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1661468 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:10.935 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:10.936 00:16:10.936 real 0m0.479s 00:16:10.936 user 0m0.019s 00:16:10.936 sys 0m0.109s 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:16:10.936 ************************************ 00:16:10.936 END TEST filesystem_in_capsule_btrfs 00:16:10.936 ************************************ 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:10.936 ************************************ 00:16:10.936 START TEST filesystem_in_capsule_xfs 00:16:10.936 ************************************ 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:16:10.936 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:10.936 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:10.936 = sectsz=512 attr=2, projid32bit=1 00:16:10.936 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:10.936 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:10.936 data = bsize=4096 blocks=130560, imaxpct=25 00:16:10.936 = sunit=0 swidth=0 blks 00:16:10.936 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:10.936 log =internal log bsize=4096 blocks=16384, version=2 00:16:10.936 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:10.936 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:11.505 Discarding blocks...Done. 
00:16:11.505 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:16:11.505 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1661468 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:14.042 00:16:14.042 real 0m3.279s 00:16:14.042 user 0m0.017s 00:16:14.042 sys 0m0.060s 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.042 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:14.042 ************************************ 00:16:14.042 END TEST filesystem_in_capsule_xfs 00:16:14.043 ************************************ 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1661468 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1661468 ']' 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1661468 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1661468 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1661468' 00:16:14.043 killing process with pid 1661468 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1661468 00:16:14.043 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1661468 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:14.647 00:16:14.647 real 0m18.905s 00:16:14.647 user 1m12.070s 00:16:14.647 sys 0m2.557s 00:16:14.647 20:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.647 ************************************ 00:16:14.647 END TEST nvmf_filesystem_in_capsule 00:16:14.647 ************************************ 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.647 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:14.647 rmmod nvme_tcp 00:16:14.921 rmmod nvme_fabrics 00:16:14.921 rmmod nvme_keyring 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:14.921 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.922 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:16.835 00:16:16.835 real 0m46.067s 00:16:16.835 user 2m34.153s 00:16:16.835 sys 0m7.900s 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:16.835 
************************************ 00:16:16.835 END TEST nvmf_filesystem 00:16:16.835 ************************************ 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.835 20:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.094 ************************************ 00:16:17.094 START TEST nvmf_target_discovery 00:16:17.094 ************************************ 00:16:17.094 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:17.094 * Looking for test storage... 00:16:17.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.094 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:17.094 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:16:17.094 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.353 --rc genhtml_branch_coverage=1 00:16:17.353 --rc genhtml_function_coverage=1 00:16:17.353 --rc genhtml_legend=1 00:16:17.353 --rc geninfo_all_blocks=1 00:16:17.353 --rc geninfo_unexecuted_blocks=1 00:16:17.353 00:16:17.353 ' 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.353 --rc genhtml_branch_coverage=1 00:16:17.353 --rc genhtml_function_coverage=1 00:16:17.353 --rc genhtml_legend=1 00:16:17.353 --rc geninfo_all_blocks=1 00:16:17.353 --rc geninfo_unexecuted_blocks=1 00:16:17.353 00:16:17.353 ' 00:16:17.353 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.353 --rc genhtml_branch_coverage=1 00:16:17.353 --rc genhtml_function_coverage=1 00:16:17.353 --rc genhtml_legend=1 00:16:17.353 --rc geninfo_all_blocks=1 00:16:17.353 --rc geninfo_unexecuted_blocks=1 00:16:17.353 00:16:17.353 ' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:17.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.354 --rc genhtml_branch_coverage=1 00:16:17.354 --rc genhtml_function_coverage=1 00:16:17.354 --rc genhtml_legend=1 00:16:17.354 --rc geninfo_all_blocks=1 00:16:17.354 --rc geninfo_unexecuted_blocks=1 00:16:17.354 00:16:17.354 ' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:17.354 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.641 20:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.641 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:20.642 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:20.642 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:20.642 Found net devices under 0000:84:00.0: cvl_0_0 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:20.642 Found net devices under 0000:84:00.1: cvl_0_1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.642 20:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:16:20.642 00:16:20.642 --- 10.0.0.2 ping statistics --- 00:16:20.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.642 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:16:20.642 00:16:20.642 --- 10.0.0.1 ping statistics --- 00:16:20.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.642 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:20.642 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1665906 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1665906 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1665906 ']' 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.642 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.643 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.643 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.643 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.643 [2024-10-08 20:43:49.065461] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:16:20.643 [2024-10-08 20:43:49.065558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.643 [2024-10-08 20:43:49.181040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.901 [2024-10-08 20:43:49.404166] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.901 [2024-10-08 20:43:49.404282] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.901 [2024-10-08 20:43:49.404341] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.901 [2024-10-08 20:43:49.404388] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.901 [2024-10-08 20:43:49.404433] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
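Above, the target binary is started inside the cvl_0_0_ns_spdk namespace with "ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF" and the harness then waits for the UNIX-domain RPC socket at /var/tmp/spdk.sock before issuing rpc_cmd calls (waitforlisten). A sketch of that launch-and-wait pattern; the polling loop and 30-second limit are a simplified stand-in for the real waitforlisten helper:

    # Simplified stand-in for the launch + waitforlisten sequence traced above;
    # the 30-second poll loop is an assumption, not the SPDK helper's logic.
    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 30); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        [ -S /var/tmp/spdk.sock ] && break    # RPC socket is up; rpc_cmd calls can proceed
        sleep 1
    done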
00:16:20.902 [2024-10-08 20:43:49.408199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.902 [2024-10-08 20:43:49.408305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.902 [2024-10-08 20:43:49.408394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.902 [2024-10-08 20:43:49.408397] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 [2024-10-08 20:43:49.582441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 Null1 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 [2024-10-08 20:43:49.622770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 Null2 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.902 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:21.161 Null3 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 Null4 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.161 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.162 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:21.421 00:16:21.421 Discovery Log Number of Records 6, Generation counter 6 00:16:21.421 =====Discovery Log Entry 0====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: current discovery subsystem 00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4420 00:16:21.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: explicit discovery connections, duplicate discovery information 00:16:21.421 sectype: none 00:16:21.421 =====Discovery Log Entry 1====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: nvme subsystem 00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4420 00:16:21.421 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: none 00:16:21.421 sectype: none 00:16:21.421 =====Discovery Log Entry 2====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: nvme subsystem 00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4420 00:16:21.421 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: none 00:16:21.421 sectype: none 00:16:21.421 =====Discovery Log Entry 3====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: nvme subsystem 00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4420 00:16:21.421 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: none 00:16:21.421 sectype: none 00:16:21.421 =====Discovery Log Entry 4====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: nvme subsystem 
00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4420 00:16:21.421 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: none 00:16:21.421 sectype: none 00:16:21.421 =====Discovery Log Entry 5====== 00:16:21.421 trtype: tcp 00:16:21.421 adrfam: ipv4 00:16:21.421 subtype: discovery subsystem referral 00:16:21.421 treq: not required 00:16:21.421 portid: 0 00:16:21.421 trsvcid: 4430 00:16:21.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:21.421 traddr: 10.0.0.2 00:16:21.421 eflags: none 00:16:21.421 sectype: none 00:16:21.421 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:21.421 Perform nvmf subsystem discovery via RPC 00:16:21.421 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:21.421 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.421 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 [ 00:16:21.421 { 00:16:21.421 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.421 "subtype": "Discovery", 00:16:21.421 "listen_addresses": [ 00:16:21.421 { 00:16:21.421 "trtype": "TCP", 00:16:21.421 "adrfam": "IPv4", 00:16:21.421 "traddr": "10.0.0.2", 00:16:21.421 "trsvcid": "4420" 00:16:21.421 } 00:16:21.421 ], 00:16:21.421 "allow_any_host": true, 00:16:21.421 "hosts": [] 00:16:21.421 }, 00:16:21.421 { 00:16:21.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.421 "subtype": "NVMe", 00:16:21.421 "listen_addresses": [ 00:16:21.421 { 00:16:21.421 "trtype": "TCP", 00:16:21.421 "adrfam": "IPv4", 00:16:21.422 "traddr": "10.0.0.2", 00:16:21.422 "trsvcid": "4420" 00:16:21.422 } 00:16:21.422 ], 00:16:21.422 "allow_any_host": true, 00:16:21.422 "hosts": [], 00:16:21.422 "serial_number": "SPDK00000000000001", 00:16:21.422 "model_number": "SPDK bdev Controller", 00:16:21.422 "max_namespaces": 32, 00:16:21.422 "min_cntlid": 1, 00:16:21.422 "max_cntlid": 65519, 00:16:21.422 "namespaces": [ 00:16:21.422 { 00:16:21.422 "nsid": 1, 00:16:21.422 "bdev_name": "Null1", 00:16:21.422 "name": "Null1", 00:16:21.422 "nguid": "74338CF74D2B43D9851AE65075AF7427", 00:16:21.422 "uuid": "74338cf7-4d2b-43d9-851a-e65075af7427" 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:21.422 "subtype": "NVMe", 00:16:21.422 "listen_addresses": [ 00:16:21.422 { 00:16:21.422 "trtype": "TCP", 00:16:21.422 "adrfam": "IPv4", 00:16:21.422 "traddr": "10.0.0.2", 00:16:21.422 "trsvcid": "4420" 00:16:21.422 } 00:16:21.422 ], 00:16:21.422 "allow_any_host": true, 00:16:21.422 "hosts": [], 00:16:21.422 "serial_number": "SPDK00000000000002", 00:16:21.422 "model_number": "SPDK bdev Controller", 00:16:21.422 "max_namespaces": 32, 00:16:21.422 "min_cntlid": 1, 00:16:21.422 "max_cntlid": 65519, 00:16:21.422 "namespaces": [ 00:16:21.422 { 00:16:21.422 "nsid": 1, 00:16:21.422 "bdev_name": "Null2", 00:16:21.422 "name": "Null2", 00:16:21.422 "nguid": "045A0B38EA5B46DF97836FB02F282877", 00:16:21.422 "uuid": "045a0b38-ea5b-46df-9783-6fb02f282877" 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:21.422 "subtype": "NVMe", 00:16:21.422 "listen_addresses": [ 00:16:21.422 { 00:16:21.422 "trtype": "TCP", 00:16:21.422 "adrfam": "IPv4", 00:16:21.422 "traddr": "10.0.0.2", 
00:16:21.422 "trsvcid": "4420" 00:16:21.422 } 00:16:21.422 ], 00:16:21.422 "allow_any_host": true, 00:16:21.422 "hosts": [], 00:16:21.422 "serial_number": "SPDK00000000000003", 00:16:21.422 "model_number": "SPDK bdev Controller", 00:16:21.422 "max_namespaces": 32, 00:16:21.422 "min_cntlid": 1, 00:16:21.422 "max_cntlid": 65519, 00:16:21.422 "namespaces": [ 00:16:21.422 { 00:16:21.422 "nsid": 1, 00:16:21.422 "bdev_name": "Null3", 00:16:21.422 "name": "Null3", 00:16:21.422 "nguid": "90E56ABEAAFD434A8AEEA1595441A72A", 00:16:21.422 "uuid": "90e56abe-aafd-434a-8aee-a1595441a72a" 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:21.422 "subtype": "NVMe", 00:16:21.422 "listen_addresses": [ 00:16:21.422 { 00:16:21.422 "trtype": "TCP", 00:16:21.422 "adrfam": "IPv4", 00:16:21.422 "traddr": "10.0.0.2", 00:16:21.422 "trsvcid": "4420" 00:16:21.422 } 00:16:21.422 ], 00:16:21.422 "allow_any_host": true, 00:16:21.422 "hosts": [], 00:16:21.422 "serial_number": "SPDK00000000000004", 00:16:21.422 "model_number": "SPDK bdev Controller", 00:16:21.422 "max_namespaces": 32, 00:16:21.422 "min_cntlid": 1, 00:16:21.422 "max_cntlid": 65519, 00:16:21.422 "namespaces": [ 00:16:21.422 { 00:16:21.422 "nsid": 1, 00:16:21.422 "bdev_name": "Null4", 00:16:21.422 "name": "Null4", 00:16:21.422 "nguid": "6853AC5A64464932B19BDEE48A104FBD", 00:16:21.422 "uuid": "6853ac5a-6446-4932-b19b-dee48a104fbd" 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:21.422 20:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.422 rmmod nvme_tcp 00:16:21.422 rmmod nvme_fabrics 00:16:21.422 rmmod nvme_keyring 00:16:21.422 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1665906 ']' 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1665906 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1665906 ']' 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1665906 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665906 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665906' 00:16:21.682 killing process with pid 1665906 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1665906 00:16:21.682 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1665906 00:16:21.940 20:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:21.940 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.941 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.478 00:16:24.478 real 0m7.101s 00:16:24.478 user 0m5.964s 00:16:24.478 sys 0m2.876s 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.478 ************************************ 00:16:24.478 END TEST nvmf_target_discovery 00:16:24.478 ************************************ 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.478 ************************************ 00:16:24.478 START TEST nvmf_referrals 00:16:24.478 ************************************ 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:24.478 * Looking for test storage... 
00:16:24.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.478 --rc genhtml_branch_coverage=1 00:16:24.478 --rc genhtml_function_coverage=1 00:16:24.478 --rc genhtml_legend=1 00:16:24.478 --rc geninfo_all_blocks=1 00:16:24.478 --rc geninfo_unexecuted_blocks=1 00:16:24.478 00:16:24.478 ' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.478 --rc genhtml_branch_coverage=1 00:16:24.478 --rc genhtml_function_coverage=1 00:16:24.478 --rc genhtml_legend=1 00:16:24.478 --rc geninfo_all_blocks=1 00:16:24.478 --rc geninfo_unexecuted_blocks=1 00:16:24.478 00:16:24.478 ' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.478 --rc genhtml_branch_coverage=1 00:16:24.478 --rc genhtml_function_coverage=1 00:16:24.478 --rc genhtml_legend=1 00:16:24.478 --rc geninfo_all_blocks=1 00:16:24.478 --rc geninfo_unexecuted_blocks=1 00:16:24.478 00:16:24.478 ' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.478 --rc genhtml_branch_coverage=1 00:16:24.478 --rc genhtml_function_coverage=1 00:16:24.478 --rc genhtml_legend=1 00:16:24.478 --rc geninfo_all_blocks=1 00:16:24.478 --rc geninfo_unexecuted_blocks=1 00:16:24.478 00:16:24.478 ' 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.478 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.479 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:27.015 20:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:27.015 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:27.275 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:27.275 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:27.275 
20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:27.275 Found net devices under 0000:84:00.0: cvl_0_0 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:27.275 Found net devices under 0000:84:00.1: cvl_0_1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:27.275 20:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:27.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:27.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:16:27.275 00:16:27.275 --- 10.0.0.2 ping statistics --- 00:16:27.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.275 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:27.275 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:16:27.275 00:16:27.275 --- 10.0.0.1 ping statistics --- 00:16:27.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.276 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1668146 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1668146 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1668146 ']' 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
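The nvmftestinit phase traced above amounts to a short bring-up: put one port of the e810 pair into a private network namespace, address both ends, open the NVMe/TCP port, confirm reachability, and launch the target inside that namespace. A condensed sketch, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses this particular run reported (run as root from the spdk checkout; the PCI discovery and address-flush steps are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &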
00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.276 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:27.535 [2024-10-08 20:43:56.041131] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:16:27.535 [2024-10-08 20:43:56.041240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.535 [2024-10-08 20:43:56.168748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.794 [2024-10-08 20:43:56.394189] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.794 [2024-10-08 20:43:56.394308] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.794 [2024-10-08 20:43:56.394365] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.794 [2024-10-08 20:43:56.394410] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.794 [2024-10-08 20:43:56.394453] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.794 [2024-10-08 20:43:56.398300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.794 [2024-10-08 20:43:56.398449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.794 [2024-10-08 20:43:56.398452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.794 [2024-10-08 20:43:56.398356] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.794 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.794 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:16:27.794 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:27.794 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.794 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 [2024-10-08 20:43:56.577318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
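The transport and discovery-listener setup just traced, plus the referral handling that follows, reduce to a handful of RPCs. A sketch driving them with scripts/rpc.py directly (the test issues the same calls through its rpc_cmd wrapper inside the target namespace, and passes explicit --hostnqn/--hostid to nvme discover; both are omitted here):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # the same three addresses should come back from the discovery service itself:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort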
00:16:28.053 [2024-10-08 20:43:56.589581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:28.053 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:28.313 20:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:28.313 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:28.572 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:28.831 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.091 20:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:29.091 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:29.350 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:29.350 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:29.351 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:29.609 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
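Referral management in the trace above is pure RPC work: the same referral address is registered once against the discovery subsystem and once against nqn.2016-06.io.spdk:cnode1, the resulting entries are checked both through nvmf_discovery_get_referrals and through the host-side discovery log, and each entry is removed again. Outside the harness, where rpc_cmd is roughly a thin wrapper around scripts/rpc.py, the equivalent calls would look like this (default /var/tmp/spdk.sock assumed):

    RPC=./scripts/rpc.py    # run from the SPDK repository root

    # referral pointing at another discovery service
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    # referral pointing hosts directly at an NVM subsystem
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery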
00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.867 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.867 rmmod nvme_tcp 00:16:29.867 rmmod nvme_fabrics 00:16:30.126 rmmod nvme_keyring 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1668146 ']' 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1668146 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1668146 ']' 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1668146 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1668146 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1668146' 00:16:30.126 killing process with pid 1668146 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1668146 00:16:30.126 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1668146 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.385 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.385 20:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.926 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:32.926 00:16:32.926 real 0m8.386s 00:16:32.926 user 0m12.451s 00:16:32.926 sys 0m3.147s 00:16:32.926 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.926 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:32.926 ************************************ 00:16:32.926 END TEST nvmf_referrals 00:16:32.926 ************************************ 00:16:32.926 20:44:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:32.926 20:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.927 ************************************ 00:16:32.927 START TEST nvmf_connect_disconnect 00:16:32.927 ************************************ 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:32.927 * Looking for test storage... 00:16:32.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.927 20:44:01 
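The referrals test above closed with the standard nvmftestfini teardown: unload the NVMe/TCP initiator modules, kill the target process, drop only the iptables rules the test tagged with an SPDK_NVMF comment, remove the target namespace and flush the test interface. A hedged sketch of that sequence, with the pid and interface names taken from this particular run:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 1668146                                            # nvmf_tgt pid from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK rules
    ip netns delete cvl_0_0_ns_spdk                         # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1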
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.927 --rc genhtml_branch_coverage=1 00:16:32.927 --rc genhtml_function_coverage=1 00:16:32.927 --rc genhtml_legend=1 00:16:32.927 --rc geninfo_all_blocks=1 00:16:32.927 --rc geninfo_unexecuted_blocks=1 00:16:32.927 00:16:32.927 ' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.927 --rc genhtml_branch_coverage=1 00:16:32.927 --rc genhtml_function_coverage=1 00:16:32.927 --rc genhtml_legend=1 00:16:32.927 --rc geninfo_all_blocks=1 00:16:32.927 --rc geninfo_unexecuted_blocks=1 00:16:32.927 00:16:32.927 ' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.927 --rc genhtml_branch_coverage=1 00:16:32.927 --rc genhtml_function_coverage=1 00:16:32.927 --rc genhtml_legend=1 00:16:32.927 --rc geninfo_all_blocks=1 00:16:32.927 --rc geninfo_unexecuted_blocks=1 00:16:32.927 00:16:32.927 ' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.927 --rc genhtml_branch_coverage=1 00:16:32.927 --rc genhtml_function_coverage=1 00:16:32.927 --rc genhtml_legend=1 00:16:32.927 --rc geninfo_all_blocks=1 00:16:32.927 --rc geninfo_unexecuted_blocks=1 00:16:32.927 00:16:32.927 ' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.927 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.928 20:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:32.928 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.463 
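The "[: : integer expression expected" message above is noise from the harness rather than a test failure: line 33 of test/nvmf/common.sh ends up evaluating [ '' -eq 1 ] because the flag it checks is empty in this configuration, the numeric test fails, and the script simply takes the false branch and carries on. A sketch of a guard that avoids the message; SOME_NVMF_FLAG is a hypothetical name, not the actual variable:

    # default an empty or unset flag to 0 before comparing numerically
    if (( ${SOME_NVMF_FLAG:-0} == 1 )); then
        NVMF_APP+=(--some-option)    # placeholder action
    fi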
20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:35.463 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.463 
20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:35.463 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:35.463 Found net devices under 0000:84:00.0: cvl_0_0 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:35.463 Found net devices under 0000:84:00.1: cvl_0_1 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:35.463 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:35.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:16:35.464 00:16:35.464 --- 10.0.0.2 ping statistics --- 00:16:35.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.464 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:35.464 00:16:35.464 --- 10.0.0.1 ping statistics --- 00:16:35.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.464 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:35.464 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:35.724 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:35.724 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:35.724 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.724 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.724 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1670590 00:16:35.724 20:44:04 
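The ip/iptables commands above build the single-host topology the rest of the run depends on: the first e810 port (cvl_0_0) is moved into a dedicated namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a tagged iptables rule admits NVMe/TCP traffic on port 4420 before both directions are verified with ping. Condensed into one place, using the names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # admit NVMe/TCP on the initiator port; the comment lets teardown find and drop the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator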
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1670590 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1670590 ']' 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.725 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.725 [2024-10-08 20:44:04.305592] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:16:35.725 [2024-10-08 20:44:04.305700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.725 [2024-10-08 20:44:04.423289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.985 [2024-10-08 20:44:04.644468] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.985 [2024-10-08 20:44:04.644601] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.985 [2024-10-08 20:44:04.644675] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.985 [2024-10-08 20:44:04.644726] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.985 [2024-10-08 20:44:04.644767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
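nvmfappstart then launches the target inside that namespace and waits for its RPC socket; the EAL parameter dump above and the reactor start-up notices that follow are the visible result of nvmf_tgt -i 0 -e 0xFFFF -m 0xF coming up on four cores. A rough standalone equivalent, with the harness's waitforlisten paraphrased as a simple polling loop:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # crude stand-in for waitforlisten: poll the RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during start-up
        sleep 0.5
    done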
00:16:35.985 [2024-10-08 20:44:04.648585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.985 [2024-10-08 20:44:04.648746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.985 [2024-10-08 20:44:04.648776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.985 [2024-10-08 20:44:04.648780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.245 [2024-10-08 20:44:04.834907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.245 20:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.245 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:36.246 [2024-10-08 20:44:04.892538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:36.246 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:39.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.426 rmmod nvme_tcp 00:16:50.426 rmmod nvme_fabrics 00:16:50.426 rmmod nvme_keyring 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1670590 ']' 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1670590 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1670590 ']' 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1670590 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
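The five "disconnected 1 controller(s)" lines above are the output of num_iterations=5 connect/disconnect cycles against the target that the RPC calls in this trace configured: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. A hedged sketch of the equivalent bring-up and loop, not the literal connect_disconnect.sh source; HOSTNQN and HOSTID stand in for the values the test generates with nvme gen-hostnqn:

    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                     # creates Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for _ in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$HOSTNQN" --hostid="$HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "disconnected 1 controller(s)" line
    done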
00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1670590 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1670590' 00:16:50.426 killing process with pid 1670590 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1670590 00:16:50.426 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1670590 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.684 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:53.224 00:16:53.224 real 0m20.249s 00:16:53.224 user 0m58.579s 00:16:53.224 sys 0m4.189s 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:53.224 ************************************ 00:16:53.224 END TEST nvmf_connect_disconnect 00:16:53.224 ************************************ 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.224 20:44:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:53.224 ************************************ 00:16:53.224 START TEST nvmf_multitarget 00:16:53.224 ************************************ 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:53.224 * Looking for test storage... 00:16:53.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:53.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.224 --rc genhtml_branch_coverage=1 00:16:53.224 --rc genhtml_function_coverage=1 00:16:53.224 --rc genhtml_legend=1 00:16:53.224 --rc geninfo_all_blocks=1 00:16:53.224 --rc geninfo_unexecuted_blocks=1 00:16:53.224 00:16:53.224 ' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:53.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.224 --rc genhtml_branch_coverage=1 00:16:53.224 --rc genhtml_function_coverage=1 00:16:53.224 --rc genhtml_legend=1 00:16:53.224 --rc geninfo_all_blocks=1 00:16:53.224 --rc geninfo_unexecuted_blocks=1 00:16:53.224 00:16:53.224 ' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:53.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.224 --rc genhtml_branch_coverage=1 00:16:53.224 --rc genhtml_function_coverage=1 00:16:53.224 --rc genhtml_legend=1 00:16:53.224 --rc geninfo_all_blocks=1 00:16:53.224 --rc geninfo_unexecuted_blocks=1 00:16:53.224 00:16:53.224 ' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:53.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.224 --rc genhtml_branch_coverage=1 00:16:53.224 --rc genhtml_function_coverage=1 00:16:53.224 --rc genhtml_legend=1 00:16:53.224 --rc geninfo_all_blocks=1 00:16:53.224 --rc geninfo_unexecuted_blocks=1 00:16:53.224 00:16:53.224 ' 00:16:53.224 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.224 20:44:21 
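The cmp_versions trace above is autotest_common.sh deciding whether the installed lcov predates 2.x so it can pick the coverage flags (lcov_rc_opt and the LCOV_OPTS/LCOV exports that follow). A rough standalone equivalent of that field-by-field comparison is sketched here; version_lt is an illustrative helper name, not the function in scripts/common.sh, which also splits on '-' and ':'.

  # Illustrative field-by-field version comparison, splitting only on '.'.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2); local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  # e.g. enable the branch/function coverage flags only for lcov releases before 2
  version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'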
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:53.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:53.225 20:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:53.225 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:56.559 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:56.559 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:56.559 Found net devices under 0000:84:00.0: cvl_0_0 00:16:56.559 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:56.560 Found net devices under 0000:84:00.1: cvl_0_1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:56.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:16:56.560 00:16:56.560 --- 10.0.0.2 ping statistics --- 00:16:56.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.560 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:16:56.560 00:16:56.560 --- 10.0.0.1 ping statistics --- 00:16:56.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.560 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1674513 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1674513 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1674513 ']' 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.560 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:56.560 [2024-10-08 20:44:25.016560] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
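nvmf_tcp_init above builds the usual physical-NIC test topology: one port of the e810 pair (cvl_0_0) is moved into a private network namespace, a 10.0.0.0/24 point-to-point link is configured between the root namespace (cvl_0_1, 10.0.0.1) and the namespace (cvl_0_0, 10.0.0.2), the NVMe/TCP port is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then started inside the namespace and waited on via /var/tmp/spdk.sock. A condensed sketch of that sequence follows; the interface names and build path are specific to this runner, and the iptables comment/cleanup bookkeeping from common.sh is omitted.

  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                        # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target namespace
  ip netns exec "$ns" ping -c 1 10.0.0.1                 # target namespace -> root namespace
  ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # waitforlisten (autotest_common.sh) then polls /var/tmp/spdk.sock until the app answers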
00:16:56.560 [2024-10-08 20:44:25.016674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.560 [2024-10-08 20:44:25.137718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.819 [2024-10-08 20:44:25.354874] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.819 [2024-10-08 20:44:25.354992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.819 [2024-10-08 20:44:25.355048] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.819 [2024-10-08 20:44:25.355094] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.819 [2024-10-08 20:44:25.355137] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.819 [2024-10-08 20:44:25.358769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.819 [2024-10-08 20:44:25.358872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.819 [2024-10-08 20:44:25.358968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.819 [2024-10-08 20:44:25.358972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:57.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:58.018 "nvmf_tgt_1" 00:16:58.018 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:58.278 "nvmf_tgt_2" 00:16:58.278 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:58.278 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:58.539 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:58.539 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:58.799 true 00:16:58.799 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:59.059 true 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.059 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.318 rmmod nvme_tcp 00:16:59.318 rmmod nvme_fabrics 00:16:59.318 rmmod nvme_keyring 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1674513 ']' 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1674513 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1674513 ']' 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1674513 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1674513 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.318 20:44:27 
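The trace above is the whole body of the multitarget test: count the targets with nvmf_get_targets piped through jq, add two more, confirm the count reached three, delete both, and confirm only the default target is left before tearing everything down. A condensed sketch of that sequence; the workspace prefix on the RPC helper path is shortened here, and multitarget.sh expresses the checks as '[' N '!=' N ']' guards rather than plain test commands.

  rpc=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]        # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]        # default + the two just created
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]        # back to just the default target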
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1674513' 00:16:59.318 killing process with pid 1674513 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1674513 00:16:59.318 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1674513 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.887 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.791 00:17:01.791 real 0m8.903s 00:17:01.791 user 0m14.555s 00:17:01.791 sys 0m3.083s 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:01.791 ************************************ 00:17:01.791 END TEST nvmf_multitarget 00:17:01.791 ************************************ 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.791 ************************************ 00:17:01.791 START TEST nvmf_rpc 00:17:01.791 ************************************ 00:17:01.791 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:02.050 * Looking for test storage... 
00:17:02.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.050 --rc genhtml_branch_coverage=1 00:17:02.050 --rc genhtml_function_coverage=1 00:17:02.050 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.050 --rc genhtml_branch_coverage=1 00:17:02.050 --rc genhtml_function_coverage=1 00:17:02.050 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.050 --rc genhtml_branch_coverage=1 00:17:02.050 --rc genhtml_function_coverage=1 00:17:02.050 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.050 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.050 --rc genhtml_branch_coverage=1 00:17:02.050 --rc genhtml_function_coverage=1 00:17:02.050 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:02.051 20:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.051 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:05.348 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:05.348 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:05.348 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:05.349 Found net devices under 0000:84:00.0: cvl_0_0 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:05.349 Found net devices under 0000:84:00.1: cvl_0_1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.349 20:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:05.349 00:17:05.349 --- 10.0.0.2 ping statistics --- 00:17:05.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.349 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
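The nvmf_tcp_init trace above splits the two E810 ports between the root namespace (initiator side, cvl_0_1 at 10.0.0.1) and a dedicated cvl_0_0_ns_spdk namespace (target side, cvl_0_0 at 10.0.0.2), opens TCP port 4420 on the initiator interface, and checks reachability with ping. Condensed from the commands traced here (run as root; interface names and addresses are specific to this rig), the setup is roughly:

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator side
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator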
00:17:05.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:17:05.349 00:17:05.349 --- 10.0.0.1 ping statistics --- 00:17:05.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.349 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1676901 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1676901 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1676901 ']' 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.349 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.349 [2024-10-08 20:44:33.780505] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
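With both directions reachable, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application answers on its RPC socket. A minimal sketch of that wait, assuming scripts/rpc.py and its rpc_get_methods call (the real helper in autotest_common.sh also validates the PID and uses max_retries=100, as the trace shows; the UNIX socket at /var/tmp/spdk.sock is reachable regardless of network namespace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
  done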
00:17:05.349 [2024-10-08 20:44:33.780693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.349 [2024-10-08 20:44:33.935362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.610 [2024-10-08 20:44:34.160101] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.610 [2024-10-08 20:44:34.160204] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.610 [2024-10-08 20:44:34.160260] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.610 [2024-10-08 20:44:34.160306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.610 [2024-10-08 20:44:34.160367] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.610 [2024-10-08 20:44:34.164725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.610 [2024-10-08 20:44:34.164797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.610 [2024-10-08 20:44:34.164771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.610 [2024-10-08 20:44:34.164801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:05.610 "tick_rate": 2700000000, 00:17:05.610 "poll_groups": [ 00:17:05.610 { 00:17:05.610 "name": "nvmf_tgt_poll_group_000", 00:17:05.610 "admin_qpairs": 0, 00:17:05.610 "io_qpairs": 0, 00:17:05.610 "current_admin_qpairs": 0, 00:17:05.610 "current_io_qpairs": 0, 00:17:05.610 "pending_bdev_io": 0, 00:17:05.610 "completed_nvme_io": 0, 00:17:05.610 "transports": [] 00:17:05.610 }, 00:17:05.610 { 00:17:05.610 "name": "nvmf_tgt_poll_group_001", 00:17:05.610 "admin_qpairs": 0, 00:17:05.610 "io_qpairs": 0, 00:17:05.610 "current_admin_qpairs": 0, 00:17:05.610 "current_io_qpairs": 0, 00:17:05.610 "pending_bdev_io": 0, 00:17:05.610 "completed_nvme_io": 0, 00:17:05.610 "transports": [] 00:17:05.610 }, 00:17:05.610 { 00:17:05.610 "name": "nvmf_tgt_poll_group_002", 00:17:05.610 "admin_qpairs": 0, 00:17:05.610 "io_qpairs": 0, 00:17:05.610 
"current_admin_qpairs": 0, 00:17:05.610 "current_io_qpairs": 0, 00:17:05.610 "pending_bdev_io": 0, 00:17:05.610 "completed_nvme_io": 0, 00:17:05.610 "transports": [] 00:17:05.610 }, 00:17:05.610 { 00:17:05.610 "name": "nvmf_tgt_poll_group_003", 00:17:05.610 "admin_qpairs": 0, 00:17:05.610 "io_qpairs": 0, 00:17:05.610 "current_admin_qpairs": 0, 00:17:05.610 "current_io_qpairs": 0, 00:17:05.610 "pending_bdev_io": 0, 00:17:05.610 "completed_nvme_io": 0, 00:17:05.610 "transports": [] 00:17:05.610 } 00:17:05.610 ] 00:17:05.610 }' 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:05.610 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.870 [2024-10-08 20:44:34.523197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:05.870 "tick_rate": 2700000000, 00:17:05.870 "poll_groups": [ 00:17:05.870 { 00:17:05.870 "name": "nvmf_tgt_poll_group_000", 00:17:05.870 "admin_qpairs": 0, 00:17:05.870 "io_qpairs": 0, 00:17:05.870 "current_admin_qpairs": 0, 00:17:05.870 "current_io_qpairs": 0, 00:17:05.870 "pending_bdev_io": 0, 00:17:05.870 "completed_nvme_io": 0, 00:17:05.870 "transports": [ 00:17:05.870 { 00:17:05.870 "trtype": "TCP" 00:17:05.870 } 00:17:05.870 ] 00:17:05.870 }, 00:17:05.870 { 00:17:05.870 "name": "nvmf_tgt_poll_group_001", 00:17:05.870 "admin_qpairs": 0, 00:17:05.870 "io_qpairs": 0, 00:17:05.870 "current_admin_qpairs": 0, 00:17:05.870 "current_io_qpairs": 0, 00:17:05.870 "pending_bdev_io": 0, 00:17:05.870 "completed_nvme_io": 0, 00:17:05.870 "transports": [ 00:17:05.870 { 00:17:05.870 "trtype": "TCP" 00:17:05.870 } 00:17:05.870 ] 00:17:05.870 }, 00:17:05.870 { 00:17:05.870 "name": "nvmf_tgt_poll_group_002", 00:17:05.870 "admin_qpairs": 0, 00:17:05.870 "io_qpairs": 0, 00:17:05.870 "current_admin_qpairs": 0, 00:17:05.870 "current_io_qpairs": 0, 00:17:05.870 "pending_bdev_io": 0, 00:17:05.870 "completed_nvme_io": 0, 00:17:05.870 "transports": [ 00:17:05.870 { 00:17:05.870 "trtype": "TCP" 
00:17:05.870 } 00:17:05.870 ] 00:17:05.870 }, 00:17:05.870 { 00:17:05.870 "name": "nvmf_tgt_poll_group_003", 00:17:05.870 "admin_qpairs": 0, 00:17:05.870 "io_qpairs": 0, 00:17:05.870 "current_admin_qpairs": 0, 00:17:05.870 "current_io_qpairs": 0, 00:17:05.870 "pending_bdev_io": 0, 00:17:05.870 "completed_nvme_io": 0, 00:17:05.870 "transports": [ 00:17:05.870 { 00:17:05.870 "trtype": "TCP" 00:17:05.870 } 00:17:05.870 ] 00:17:05.870 } 00:17:05.870 ] 00:17:05.870 }' 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:05.870 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:06.129 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:06.129 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:06.129 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 Malloc1 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
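The two nvmf_get_stats dumps bracket the transport creation: before nvmf_create_transport each poll group's transports list is empty, and afterwards all four poll groups (one per core with -m 0xF) report a TCP transport. The jcount and jsum helpers are thin jq wrappers; expressed directly, the checks amount to something like the following (rpc_cmd wraps scripts/rpc.py; the -o and -u 8192 transport options are reproduced verbatim from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l                                 # expect 4
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # expect 0
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'      # expect 0, nothing connected yet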
common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 [2024-10-08 20:44:34.732942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:17:06.130 [2024-10-08 20:44:34.765725] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:17:06.130 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:06.130 could not add new controller: failed to write to nvme-fabrics device 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:06.130 20:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.699 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.699 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.699 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.699 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.699 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.239 [2024-10-08 20:44:37.584749] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:17:09.239 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:09.239 could not add new controller: failed to write to nvme-fabrics device 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.239 
20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.239 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.812 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:09.812 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.812 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.812 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:09.812 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:11.718 
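The rejected connects above are the host access-control check: with allow_any_host disabled, a connect from an NQN that is not on the subsystem's allow list fails at the fabrics layer ("Subsystem ... does not allow host"), and it only succeeds after nvmf_subsystem_add_host registers the host NQN or allow_any_host is re-enabled. Reduced to direct calls (rpc_cmd wraps scripts/rpc.py; HOSTNQN and HOSTID stand in for the long uuid-based values used in this run):

  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1       # allow-list mode: unknown hosts rejected
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"     # permit this host, connect now succeeds
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"  # revoke it, connect is rejected again
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # open the subsystem to any host
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"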
20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 [2024-10-08 20:44:40.455507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.718 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.656 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.656 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.656 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.656 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:12.656 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 [2024-10-08 20:44:43.295277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.501 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.501 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.501 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.501 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.501 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.405 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.405 [2024-10-08 20:44:46.164314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.664 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.231 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.231 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.231 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.231 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:18.231 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:20.136 
20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.136 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
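Each pass of the seq 1 5 loop traced here runs the same create/attach/verify/teardown cycle against nqn.2016-06.io.spdk:cnode1. Written out as direct scripts/rpc.py and nvme-cli calls (rpc_cmd is a thin wrapper around rpc.py; host NQN options omitted for brevity), one iteration is:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME     # -s: serial number
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5                # attach the Malloc bdev as nsid 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1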
00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 [2024-10-08 20:44:48.952287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.395 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.962 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.962 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.962 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.962 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:20.962 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:22.867 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:22.867 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:22.867 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 [2024-10-08 20:44:51.745499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.127 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.063 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:24.063 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:24.063 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.063 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:24.063 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:25.970 
20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 [2024-10-08 20:44:54.615103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.970 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 [2024-10-08 20:44:54.663131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 
20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 [2024-10-08 20:44:54.711266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 [2024-10-08 20:44:54.759427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 [2024-10-08 20:44:54.807587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:26.231 "tick_rate": 2700000000, 00:17:26.231 "poll_groups": [ 00:17:26.231 { 00:17:26.231 "name": "nvmf_tgt_poll_group_000", 00:17:26.231 "admin_qpairs": 2, 00:17:26.231 "io_qpairs": 84, 00:17:26.231 "current_admin_qpairs": 0, 00:17:26.231 "current_io_qpairs": 0, 00:17:26.231 "pending_bdev_io": 0, 00:17:26.231 "completed_nvme_io": 182, 00:17:26.231 "transports": [ 00:17:26.231 { 00:17:26.231 "trtype": "TCP" 00:17:26.231 } 00:17:26.231 ] 00:17:26.231 }, 00:17:26.231 { 00:17:26.231 "name": "nvmf_tgt_poll_group_001", 00:17:26.231 "admin_qpairs": 2, 00:17:26.231 "io_qpairs": 84, 00:17:26.231 "current_admin_qpairs": 0, 00:17:26.231 "current_io_qpairs": 0, 00:17:26.231 "pending_bdev_io": 0, 00:17:26.231 "completed_nvme_io": 108, 00:17:26.231 "transports": [ 00:17:26.231 { 00:17:26.231 "trtype": "TCP" 00:17:26.231 } 00:17:26.231 ] 00:17:26.231 }, 00:17:26.231 { 00:17:26.231 "name": "nvmf_tgt_poll_group_002", 00:17:26.231 "admin_qpairs": 1, 00:17:26.231 "io_qpairs": 84, 00:17:26.231 "current_admin_qpairs": 0, 00:17:26.231 "current_io_qpairs": 0, 00:17:26.231 "pending_bdev_io": 0, 00:17:26.231 "completed_nvme_io": 162, 00:17:26.231 "transports": [ 00:17:26.231 { 00:17:26.231 "trtype": "TCP" 00:17:26.231 } 00:17:26.231 ] 00:17:26.231 }, 00:17:26.231 { 00:17:26.231 "name": "nvmf_tgt_poll_group_003", 00:17:26.231 "admin_qpairs": 2, 00:17:26.231 "io_qpairs": 84, 00:17:26.231 "current_admin_qpairs": 0, 00:17:26.231 "current_io_qpairs": 0, 00:17:26.231 "pending_bdev_io": 0, 00:17:26.231 "completed_nvme_io": 234, 00:17:26.231 "transports": [ 00:17:26.231 { 00:17:26.231 "trtype": "TCP" 00:17:26.231 } 00:17:26.231 ] 00:17:26.231 } 00:17:26.231 ] 00:17:26.231 }' 00:17:26.231 20:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:26.231 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.232 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.232 rmmod nvme_tcp 00:17:26.491 rmmod nvme_fabrics 00:17:26.491 rmmod nvme_keyring 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1676901 ']' 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1676901 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1676901 ']' 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1676901 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1676901 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1676901' 00:17:26.491 killing process with pid 1676901 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1676901 00:17:26.491 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1676901 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.060 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.965 00:17:28.965 real 0m27.072s 00:17:28.965 user 1m25.123s 00:17:28.965 sys 0m5.065s 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.965 ************************************ 00:17:28.965 END TEST nvmf_rpc 00:17:28.965 ************************************ 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.965 ************************************ 00:17:28.965 START TEST nvmf_invalid 00:17:28.965 ************************************ 00:17:28.965 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:29.224 * Looking for test storage... 
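Before the invalid-parameter test gets going, one note on the check that closed out nvmf_rpc above: jsum simply sums a single numeric field across every poll group in the nvmf_get_stats output. For the JSON printed earlier the admin queue pairs add up to 2 + 2 + 1 + 2 = 7 and the I/O queue pairs to 4 x 84 = 336, and the test only requires both sums to be greater than zero. A standalone equivalent of that jq/awk idiom, assuming the stats have been dumped to stats.json via scripts/rpc.py, is:

jsum() {
    # $1 is a jq filter selecting one numeric field per poll group; print the total across groups.
    jq "$1" stats.json | awk '{s += $1} END {print s}'
}

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats > stats.json
jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
jsum '.poll_groups[].io_qpairs'      # 336 in the run above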
00:17:29.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.224 --rc genhtml_branch_coverage=1 00:17:29.224 --rc genhtml_function_coverage=1 00:17:29.224 --rc genhtml_legend=1 00:17:29.224 --rc geninfo_all_blocks=1 00:17:29.224 --rc geninfo_unexecuted_blocks=1 00:17:29.224 00:17:29.224 ' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.224 --rc genhtml_branch_coverage=1 00:17:29.224 --rc genhtml_function_coverage=1 00:17:29.224 --rc genhtml_legend=1 00:17:29.224 --rc geninfo_all_blocks=1 00:17:29.224 --rc geninfo_unexecuted_blocks=1 00:17:29.224 00:17:29.224 ' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.224 --rc genhtml_branch_coverage=1 00:17:29.224 --rc genhtml_function_coverage=1 00:17:29.224 --rc genhtml_legend=1 00:17:29.224 --rc geninfo_all_blocks=1 00:17:29.224 --rc geninfo_unexecuted_blocks=1 00:17:29.224 00:17:29.224 ' 00:17:29.224 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.225 --rc genhtml_branch_coverage=1 00:17:29.225 --rc genhtml_function_coverage=1 00:17:29.225 --rc genhtml_legend=1 00:17:29.225 --rc geninfo_all_blocks=1 00:17:29.225 --rc geninfo_unexecuted_blocks=1 00:17:29.225 00:17:29.225 ' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:29.225 20:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:29.225 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:32.517 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:32.517 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:32.517 Found net devices under 0000:84:00.0: cvl_0_0 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:32.517 Found net devices under 0000:84:00.1: cvl_0_1 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:32.517 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:17:32.518 00:17:32.518 --- 10.0.0.2 ping statistics --- 00:17:32.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.518 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:32.518 00:17:32.518 --- 10.0.0.1 ping statistics --- 00:17:32.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.518 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1681566 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1681566 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1681566 ']' 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.518 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:32.518 [2024-10-08 20:45:00.977284] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
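To summarize the nvmftestinit and nvmfappstart plumbing traced above: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port is opened in iptables, a single ping in each direction verifies the link, and nvmf_tgt is then started inside that namespace and polled until it listens on /var/tmp/spdk.sock. Condensed into a sketch that assumes the same interface names and paths:

# Network plumbing: target NIC into its own netns, initiator NIC stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port

# Sanity checks and kernel initiator support.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# Launch the target inside the namespace; the harness then waits for it on /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &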
00:17:32.518 [2024-10-08 20:45:00.977375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.518 [2024-10-08 20:45:01.077846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.778 [2024-10-08 20:45:01.301263] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.778 [2024-10-08 20:45:01.301360] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.778 [2024-10-08 20:45:01.301396] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.778 [2024-10-08 20:45:01.301426] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.778 [2024-10-08 20:45:01.301451] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.778 [2024-10-08 20:45:01.305209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.778 [2024-10-08 20:45:01.305312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.778 [2024-10-08 20:45:01.305415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.778 [2024-10-08 20:45:01.305419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:33.774 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4238 00:17:34.032 [2024-10-08 20:45:02.598357] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:34.032 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:34.032 { 00:17:34.032 "nqn": "nqn.2016-06.io.spdk:cnode4238", 00:17:34.032 "tgt_name": "foobar", 00:17:34.032 "method": "nvmf_create_subsystem", 00:17:34.032 "req_id": 1 00:17:34.032 } 00:17:34.032 Got JSON-RPC error response 00:17:34.032 response: 00:17:34.032 { 00:17:34.032 "code": -32603, 00:17:34.032 "message": "Unable to find target foobar" 00:17:34.032 }' 00:17:34.032 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:34.032 { 00:17:34.032 "nqn": "nqn.2016-06.io.spdk:cnode4238", 00:17:34.032 "tgt_name": "foobar", 00:17:34.032 "method": "nvmf_create_subsystem", 00:17:34.032 "req_id": 1 00:17:34.032 } 00:17:34.032 Got JSON-RPC error response 00:17:34.032 
response: 00:17:34.032 { 00:17:34.032 "code": -32603, 00:17:34.032 "message": "Unable to find target foobar" 00:17:34.032 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:34.032 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:34.032 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2512 00:17:34.597 [2024-10-08 20:45:03.244503] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2512: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:34.597 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:34.597 { 00:17:34.597 "nqn": "nqn.2016-06.io.spdk:cnode2512", 00:17:34.597 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:34.597 "method": "nvmf_create_subsystem", 00:17:34.597 "req_id": 1 00:17:34.597 } 00:17:34.597 Got JSON-RPC error response 00:17:34.597 response: 00:17:34.597 { 00:17:34.597 "code": -32602, 00:17:34.597 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:34.597 }' 00:17:34.597 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:34.597 { 00:17:34.597 "nqn": "nqn.2016-06.io.spdk:cnode2512", 00:17:34.597 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:34.597 "method": "nvmf_create_subsystem", 00:17:34.597 "req_id": 1 00:17:34.597 } 00:17:34.597 Got JSON-RPC error response 00:17:34.597 response: 00:17:34.597 { 00:17:34.597 "code": -32602, 00:17:34.597 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:34.597 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:34.597 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:34.597 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9875 00:17:35.162 [2024-10-08 20:45:03.826391] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9875: invalid model number 'SPDK_Controller' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:35.162 { 00:17:35.162 "nqn": "nqn.2016-06.io.spdk:cnode9875", 00:17:35.162 "model_number": "SPDK_Controller\u001f", 00:17:35.162 "method": "nvmf_create_subsystem", 00:17:35.162 "req_id": 1 00:17:35.162 } 00:17:35.162 Got JSON-RPC error response 00:17:35.162 response: 00:17:35.162 { 00:17:35.162 "code": -32602, 00:17:35.162 "message": "Invalid MN SPDK_Controller\u001f" 00:17:35.162 }' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:35.162 { 00:17:35.162 "nqn": "nqn.2016-06.io.spdk:cnode9875", 00:17:35.162 "model_number": "SPDK_Controller\u001f", 00:17:35.162 "method": "nvmf_create_subsystem", 00:17:35.162 "req_id": 1 00:17:35.162 } 00:17:35.162 Got JSON-RPC error response 00:17:35.162 response: 00:17:35.162 { 00:17:35.162 "code": -32602, 00:17:35.162 "message": "Invalid MN SPDK_Controller\u001f" 00:17:35.162 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:35.162 20:45:03 
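Each negative check in target/invalid.sh follows the same pattern: send one nvmf_create_subsystem RPC with a single deliberately bad field, capture the JSON-RPC error text, and glob-match the message. A sketch of the three checks above (unknown target name, then a serial number and a model number each ending in a non-printable 0x1f byte), with error capture simplified:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used in this run
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4238 2>&1) || true
  [[ $out == *"Unable to find target"* ]]
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2512 2>&1) || true
  [[ $out == *"Invalid SN"* ]]
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9875 2>&1) || true
  [[ $out == *"Invalid MN"* ]]

All three come back as -32603/-32602 JSON-RPC error responses rather than taking the target down, which is the property under test.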
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x58' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:35.162 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:35.163 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:35.163 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.163 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'qGwo.]iP|XCXyzN:G.8-2' 00:17:35.420 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'qGwo.]iP|XCXyzN:G.8-2' nqn.2016-06.io.spdk:cnode11333 00:17:35.985 [2024-10-08 20:45:04.584937] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11333: invalid serial number 'qGwo.]iP|XCXyzN:G.8-2' 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:35.985 { 00:17:35.985 "nqn": "nqn.2016-06.io.spdk:cnode11333", 00:17:35.985 "serial_number": "qGwo.]iP|XCXyzN:G.8-2", 00:17:35.985 "method": "nvmf_create_subsystem", 00:17:35.985 "req_id": 1 00:17:35.985 } 00:17:35.985 Got JSON-RPC error response 00:17:35.985 response: 00:17:35.985 { 00:17:35.985 "code": -32602, 00:17:35.985 "message": "Invalid SN qGwo.]iP|XCXyzN:G.8-2" 00:17:35.985 }' 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:35.985 { 00:17:35.985 "nqn": "nqn.2016-06.io.spdk:cnode11333", 00:17:35.985 "serial_number": "qGwo.]iP|XCXyzN:G.8-2", 00:17:35.985 "method": "nvmf_create_subsystem", 00:17:35.985 "req_id": 1 00:17:35.985 } 00:17:35.985 Got JSON-RPC error response 00:17:35.985 response: 00:17:35.985 { 00:17:35.985 "code": -32602, 00:17:35.985 "message": "Invalid SN qGwo.]iP|XCXyzN:G.8-2" 00:17:35.985 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 
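The long printf/echo trace above is gen_random_s building a 21-character string, one character per loop pass, from ASCII codes 32 through 127; the result ('qGwo.]iP|XCXyzN:G.8-2') is then rejected as a serial number. Reconstructed from the xtrace rather than copied from the script, the helper amounts to something like:

  gen_random_s() {
      # pick $1 random characters from the printable range; the real helper lists the
      # codes 32..127 literally and also guards against a leading '-' in the result
      local length=$1 ll string=
      local chars=($(seq 32 127))
      for (( ll = 0; ll < length; ll++ )); do
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

The same helper is invoked again right after this with a length of 41 to produce a model number containing a DEL byte, which is likewise rejected.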
00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:35.985 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x39' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 45 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:35.986 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:17:35.987 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:36.244 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\/vF;O' 00:17:36.245 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\/vF;O' nqn.2016-06.io.spdk:cnode21176 00:17:36.810 [2024-10-08 20:45:05.311247] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21176: invalid model number '^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\/vF;O' 00:17:36.810 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 
00:17:36.810 { 00:17:36.810 "nqn": "nqn.2016-06.io.spdk:cnode21176", 00:17:36.810 "model_number": "^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\\/vF;\u007fO", 00:17:36.810 "method": "nvmf_create_subsystem", 00:17:36.810 "req_id": 1 00:17:36.810 } 00:17:36.810 Got JSON-RPC error response 00:17:36.810 response: 00:17:36.810 { 00:17:36.810 "code": -32602, 00:17:36.810 "message": "Invalid MN ^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\\/vF;\u007fO" 00:17:36.810 }' 00:17:36.810 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:36.810 { 00:17:36.810 "nqn": "nqn.2016-06.io.spdk:cnode21176", 00:17:36.810 "model_number": "^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\\/vF;\u007fO", 00:17:36.810 "method": "nvmf_create_subsystem", 00:17:36.810 "req_id": 1 00:17:36.810 } 00:17:36.810 Got JSON-RPC error response 00:17:36.810 response: 00:17:36.810 { 00:17:36.810 "code": -32602, 00:17:36.810 "message": "Invalid MN ^/7!/p03.f{`$W]9 $`8=_-mof#^wE)g%s\\/vF;\u007fO" 00:17:36.810 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:36.810 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:37.067 [2024-10-08 20:45:05.692602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.067 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:37.632 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:37.632 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:37.632 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:37.632 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:37.632 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:37.889 [2024-10-08 20:45:06.643785] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:38.147 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:38.147 { 00:17:38.147 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:38.147 "listen_address": { 00:17:38.147 "trtype": "tcp", 00:17:38.147 "traddr": "", 00:17:38.147 "trsvcid": "4421" 00:17:38.147 }, 00:17:38.147 "method": "nvmf_subsystem_remove_listener", 00:17:38.147 "req_id": 1 00:17:38.147 } 00:17:38.147 Got JSON-RPC error response 00:17:38.147 response: 00:17:38.147 { 00:17:38.147 "code": -32602, 00:17:38.147 "message": "Invalid parameters" 00:17:38.147 }' 00:17:38.147 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:38.147 { 00:17:38.147 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:38.147 "listen_address": { 00:17:38.147 "trtype": "tcp", 00:17:38.147 "traddr": "", 00:17:38.147 "trsvcid": "4421" 00:17:38.147 }, 00:17:38.147 "method": "nvmf_subsystem_remove_listener", 00:17:38.147 "req_id": 1 00:17:38.147 } 00:17:38.147 Got JSON-RPC error response 00:17:38.147 response: 00:17:38.147 { 00:17:38.147 "code": -32602, 00:17:38.147 "message": "Invalid parameters" 00:17:38.147 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:38.147 20:45:06 
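With the malformed-field checks done, the script brings up the TCP transport, creates one valid subsystem, and then asks to remove a listener that was never added (empty traddr, port 4421). The assertion above is that this must surface as a clean JSON-RPC error and specifically must not report 'Unable to stop listener.'. Condensed, with rpc pointing at scripts/rpc.py as before:

  $rpc nvmf_create_transport --trtype tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
  out=$($rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 2>&1) || true
  [[ $out != *"Unable to stop listener."* ]]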
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1031 -i 0 00:17:38.404 [2024-10-08 20:45:07.025019] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1031: invalid cntlid range [0-65519] 00:17:38.404 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:38.404 { 00:17:38.404 "nqn": "nqn.2016-06.io.spdk:cnode1031", 00:17:38.404 "min_cntlid": 0, 00:17:38.404 "method": "nvmf_create_subsystem", 00:17:38.404 "req_id": 1 00:17:38.404 } 00:17:38.404 Got JSON-RPC error response 00:17:38.404 response: 00:17:38.404 { 00:17:38.404 "code": -32602, 00:17:38.404 "message": "Invalid cntlid range [0-65519]" 00:17:38.404 }' 00:17:38.404 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:38.404 { 00:17:38.404 "nqn": "nqn.2016-06.io.spdk:cnode1031", 00:17:38.404 "min_cntlid": 0, 00:17:38.404 "method": "nvmf_create_subsystem", 00:17:38.404 "req_id": 1 00:17:38.404 } 00:17:38.404 Got JSON-RPC error response 00:17:38.404 response: 00:17:38.404 { 00:17:38.404 "code": -32602, 00:17:38.404 "message": "Invalid cntlid range [0-65519]" 00:17:38.404 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.404 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12548 -i 65520 00:17:38.662 [2024-10-08 20:45:07.402268] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12548: invalid cntlid range [65520-65519] 00:17:38.919 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:38.919 { 00:17:38.919 "nqn": "nqn.2016-06.io.spdk:cnode12548", 00:17:38.919 "min_cntlid": 65520, 00:17:38.919 "method": "nvmf_create_subsystem", 00:17:38.919 "req_id": 1 00:17:38.919 } 00:17:38.919 Got JSON-RPC error response 00:17:38.919 response: 00:17:38.919 { 00:17:38.919 "code": -32602, 00:17:38.919 "message": "Invalid cntlid range [65520-65519]" 00:17:38.919 }' 00:17:38.919 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:38.919 { 00:17:38.919 "nqn": "nqn.2016-06.io.spdk:cnode12548", 00:17:38.919 "min_cntlid": 65520, 00:17:38.919 "method": "nvmf_create_subsystem", 00:17:38.919 "req_id": 1 00:17:38.919 } 00:17:38.919 Got JSON-RPC error response 00:17:38.919 response: 00:17:38.919 { 00:17:38.919 "code": -32602, 00:17:38.919 "message": "Invalid cntlid range [65520-65519]" 00:17:38.919 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.919 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21816 -I 0 00:17:39.485 [2024-10-08 20:45:07.964154] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21816: invalid cntlid range [1-0] 00:17:39.485 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:39.485 { 00:17:39.485 "nqn": "nqn.2016-06.io.spdk:cnode21816", 00:17:39.485 "max_cntlid": 0, 00:17:39.485 "method": "nvmf_create_subsystem", 00:17:39.485 "req_id": 1 00:17:39.485 } 00:17:39.485 Got JSON-RPC error response 00:17:39.485 response: 00:17:39.485 { 00:17:39.485 "code": -32602, 00:17:39.485 "message": "Invalid 
cntlid range [1-0]" 00:17:39.485 }' 00:17:39.485 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:39.485 { 00:17:39.485 "nqn": "nqn.2016-06.io.spdk:cnode21816", 00:17:39.485 "max_cntlid": 0, 00:17:39.485 "method": "nvmf_create_subsystem", 00:17:39.485 "req_id": 1 00:17:39.485 } 00:17:39.485 Got JSON-RPC error response 00:17:39.485 response: 00:17:39.485 { 00:17:39.485 "code": -32602, 00:17:39.485 "message": "Invalid cntlid range [1-0]" 00:17:39.485 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.485 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1843 -I 65520 00:17:39.741 [2024-10-08 20:45:08.289237] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1843: invalid cntlid range [1-65520] 00:17:39.741 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:39.741 { 00:17:39.741 "nqn": "nqn.2016-06.io.spdk:cnode1843", 00:17:39.741 "max_cntlid": 65520, 00:17:39.741 "method": "nvmf_create_subsystem", 00:17:39.741 "req_id": 1 00:17:39.741 } 00:17:39.741 Got JSON-RPC error response 00:17:39.741 response: 00:17:39.741 { 00:17:39.741 "code": -32602, 00:17:39.741 "message": "Invalid cntlid range [1-65520]" 00:17:39.741 }' 00:17:39.741 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:39.741 { 00:17:39.741 "nqn": "nqn.2016-06.io.spdk:cnode1843", 00:17:39.741 "max_cntlid": 65520, 00:17:39.741 "method": "nvmf_create_subsystem", 00:17:39.741 "req_id": 1 00:17:39.741 } 00:17:39.741 Got JSON-RPC error response 00:17:39.741 response: 00:17:39.741 { 00:17:39.741 "code": -32602, 00:17:39.741 "message": "Invalid cntlid range [1-65520]" 00:17:39.741 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.741 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9668 -i 6 -I 5 00:17:39.998 [2024-10-08 20:45:08.650496] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9668: invalid cntlid range [6-5] 00:17:39.998 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:39.998 { 00:17:39.998 "nqn": "nqn.2016-06.io.spdk:cnode9668", 00:17:39.998 "min_cntlid": 6, 00:17:39.998 "max_cntlid": 5, 00:17:39.998 "method": "nvmf_create_subsystem", 00:17:39.998 "req_id": 1 00:17:39.998 } 00:17:39.998 Got JSON-RPC error response 00:17:39.998 response: 00:17:39.998 { 00:17:39.998 "code": -32602, 00:17:39.998 "message": "Invalid cntlid range [6-5]" 00:17:39.998 }' 00:17:39.998 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:39.998 { 00:17:39.998 "nqn": "nqn.2016-06.io.spdk:cnode9668", 00:17:39.998 "min_cntlid": 6, 00:17:39.998 "max_cntlid": 5, 00:17:39.998 "method": "nvmf_create_subsystem", 00:17:39.998 "req_id": 1 00:17:39.998 } 00:17:39.998 Got JSON-RPC error response 00:17:39.998 response: 00:17:39.998 { 00:17:39.998 "code": -32602, 00:17:39.998 "message": "Invalid cntlid range [6-5]" 00:17:39.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.998 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target 
--name foobar 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:40.256 { 00:17:40.256 "name": "foobar", 00:17:40.256 "method": "nvmf_delete_target", 00:17:40.256 "req_id": 1 00:17:40.256 } 00:17:40.256 Got JSON-RPC error response 00:17:40.256 response: 00:17:40.256 { 00:17:40.256 "code": -32602, 00:17:40.256 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:40.256 }' 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:40.256 { 00:17:40.256 "name": "foobar", 00:17:40.256 "method": "nvmf_delete_target", 00:17:40.256 "req_id": 1 00:17:40.256 } 00:17:40.256 Got JSON-RPC error response 00:17:40.256 response: 00:17:40.256 { 00:17:40.256 "code": -32602, 00:17:40.256 "message": "The specified target doesn't exist, cannot delete it." 00:17:40.256 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.256 rmmod nvme_tcp 00:17:40.256 rmmod nvme_fabrics 00:17:40.256 rmmod nvme_keyring 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1681566 ']' 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1681566 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1681566 ']' 00:17:40.256 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1681566 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1681566 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1681566' 00:17:40.257 killing process with pid 1681566 00:17:40.257 20:45:08 
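The remaining negative checks walk the controller-ID bounds (a cntlid must lie in 1-65519 and min_cntlid must not exceed max_cntlid) and finish by deleting a target that was never created; every case above returns a -32602 'Invalid cntlid range [x-y]' or 'The specified target doesn't exist' error. In sketch form, with the per-case NQNs collapsed into one placeholder:

  for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
      # $args left unquoted on purpose so '-i 6 -I 5' splits into two flags
      out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX $args 2>&1) || true
      [[ $out == *"Invalid cntlid range"* ]]
  done
  out=$(spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 2>&1) || true
  [[ $out == *"cannot delete it."* ]]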
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1681566 00:17:40.257 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1681566 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.826 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.739 00:17:42.739 real 0m13.754s 00:17:42.739 user 0m38.673s 00:17:42.739 sys 0m3.643s 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:42.739 ************************************ 00:17:42.739 END TEST nvmf_invalid 00:17:42.739 ************************************ 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.739 ************************************ 00:17:42.739 START TEST nvmf_connect_stress 00:17:42.739 ************************************ 00:17:42.739 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:42.999 * Looking for test storage... 
00:17:42.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.999 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:42.999 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:42.999 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.260 --rc genhtml_branch_coverage=1 00:17:43.260 --rc genhtml_function_coverage=1 00:17:43.260 --rc genhtml_legend=1 00:17:43.260 --rc geninfo_all_blocks=1 00:17:43.260 --rc geninfo_unexecuted_blocks=1 00:17:43.260 00:17:43.260 ' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.260 --rc genhtml_branch_coverage=1 00:17:43.260 --rc genhtml_function_coverage=1 00:17:43.260 --rc genhtml_legend=1 00:17:43.260 --rc geninfo_all_blocks=1 00:17:43.260 --rc geninfo_unexecuted_blocks=1 00:17:43.260 00:17:43.260 ' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.260 --rc genhtml_branch_coverage=1 00:17:43.260 --rc genhtml_function_coverage=1 00:17:43.260 --rc genhtml_legend=1 00:17:43.260 --rc geninfo_all_blocks=1 00:17:43.260 --rc geninfo_unexecuted_blocks=1 00:17:43.260 00:17:43.260 ' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:43.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.260 --rc genhtml_branch_coverage=1 00:17:43.260 --rc genhtml_function_coverage=1 00:17:43.260 --rc genhtml_legend=1 00:17:43.260 --rc geninfo_all_blocks=1 00:17:43.260 --rc geninfo_unexecuted_blocks=1 00:17:43.260 00:17:43.260 ' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:43.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.260 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.261 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.554 20:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.554 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:46.555 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:46.555 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:46.555 Found net devices under 0000:84:00.0: cvl_0_0 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:46.555 Found net devices under 0000:84:00.1: cvl_0_1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:17:46.555 00:17:46.555 --- 10.0.0.2 ping statistics --- 00:17:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.555 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:46.555 00:17:46.555 --- 10.0.0.1 ping statistics --- 00:17:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.555 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1685330 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1685330 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1685330 ']' 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.555 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:46.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.556 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.556 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.556 [2024-10-08 20:45:14.933778] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:17:46.556 [2024-10-08 20:45:14.933951] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.556 [2024-10-08 20:45:15.096380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:46.814 [2024-10-08 20:45:15.326002] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.814 [2024-10-08 20:45:15.326102] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.814 [2024-10-08 20:45:15.326139] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.814 [2024-10-08 20:45:15.326169] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.814 [2024-10-08 20:45:15.326195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.814 [2024-10-08 20:45:15.328296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.814 [2024-10-08 20:45:15.328402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.815 [2024-10-08 20:45:15.328405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.815 [2024-10-08 20:45:15.490256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.815 [2024-10-08 20:45:15.525795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.815 NULL1 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1685473 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.815 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:47.073 20:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.073 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.331 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.331 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:47.331 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.331 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.331 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.588 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.588 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:47.588 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.588 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.588 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.846 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.846 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:47.846 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.846 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.846 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.411 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.411 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:48.411 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.411 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.411 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.668 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.668 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:48.668 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.668 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.668 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.926 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.926 20:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:48.926 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.926 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.926 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.183 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.183 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:49.183 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.183 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.183 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.440 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.440 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:49.440 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.440 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.440 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.005 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.005 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:50.005 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.005 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.005 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.262 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:50.262 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.262 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.262 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.520 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.520 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:50.520 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.520 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.520 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.778 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.778 20:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:50.778 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.778 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.778 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.035 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.035 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:51.035 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.035 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.035 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.599 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.599 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:51.599 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.599 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.599 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.856 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.856 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:51.856 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.856 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.856 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.114 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.114 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:52.114 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.114 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.114 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.371 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.371 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:52.371 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.371 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.371 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.628 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.628 20:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:52.628 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.628 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.628 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.193 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.193 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:53.193 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.193 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.193 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.452 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.452 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:53.452 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.452 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.452 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.710 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.710 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:53.710 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.710 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.710 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.968 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:53.968 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.968 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.968 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.534 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.534 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:54.534 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.534 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.534 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.792 20:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:54.792 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.792 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.792 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.050 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.050 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:55.050 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.050 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.050 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.307 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:55.308 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.308 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.308 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.565 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.565 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:55.565 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.565 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.566 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.130 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.130 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:56.130 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.130 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.130 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.388 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.388 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:56.388 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.388 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.388 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.646 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.646 20:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:56.646 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.646 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.646 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.903 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.903 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:56.903 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.903 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.903 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.161 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1685473 00:17:57.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1685473) - No such process 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1685473 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.161 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.161 rmmod nvme_tcp 00:17:57.161 rmmod nvme_fabrics 00:17:57.421 rmmod nvme_keyring 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1685330 ']' 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1685330 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1685330 ']' 00:17:57.421 20:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1685330 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.421 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685330 00:17:57.421 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:57.421 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:57.421 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685330' 00:17:57.421 killing process with pid 1685330 00:17:57.421 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1685330 00:17:57.421 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1685330 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.680 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.222 00:18:00.222 real 0m16.957s 00:18:00.222 user 0m39.420s 00:18:00.222 sys 0m7.156s 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 ************************************ 00:18:00.222 END TEST nvmf_connect_stress 00:18:00.222 ************************************ 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.222 
20:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 ************************************ 00:18:00.222 START TEST nvmf_fused_ordering 00:18:00.222 ************************************ 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:00.222 * Looking for test storage... 00:18:00.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:00.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.222 --rc genhtml_branch_coverage=1 00:18:00.222 --rc genhtml_function_coverage=1 00:18:00.222 --rc genhtml_legend=1 00:18:00.222 --rc geninfo_all_blocks=1 00:18:00.222 --rc geninfo_unexecuted_blocks=1 00:18:00.222 00:18:00.222 ' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:00.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.222 --rc genhtml_branch_coverage=1 00:18:00.222 --rc genhtml_function_coverage=1 00:18:00.222 --rc genhtml_legend=1 00:18:00.222 --rc geninfo_all_blocks=1 00:18:00.222 --rc geninfo_unexecuted_blocks=1 00:18:00.222 00:18:00.222 ' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:00.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.222 --rc genhtml_branch_coverage=1 00:18:00.222 --rc genhtml_function_coverage=1 00:18:00.222 --rc genhtml_legend=1 00:18:00.222 --rc geninfo_all_blocks=1 00:18:00.222 --rc geninfo_unexecuted_blocks=1 00:18:00.222 00:18:00.222 ' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:00.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.222 --rc genhtml_branch_coverage=1 00:18:00.222 --rc genhtml_function_coverage=1 00:18:00.222 --rc genhtml_legend=1 00:18:00.222 --rc geninfo_all_blocks=1 00:18:00.222 --rc geninfo_unexecuted_blocks=1 00:18:00.222 00:18:00.222 ' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.222 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:00.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.223 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:03.516 20:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:03.516 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:03.516 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:03.516 Found net devices under 0000:84:00.0: cvl_0_0 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:03.516 Found net devices under 0000:84:00.1: cvl_0_1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.516 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.517 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:03.517 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:03.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:18:03.517 00:18:03.517 --- 10.0.0.2 ping statistics --- 00:18:03.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.517 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:18:03.517 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:18:03.517 00:18:03.517 --- 10.0.0.1 ping statistics --- 00:18:03.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.517 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:03.517 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1688771 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1688771 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1688771 ']' 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:03.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.517 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.517 [2024-10-08 20:45:32.145270] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:03.517 [2024-10-08 20:45:32.145458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.776 [2024-10-08 20:45:32.309231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.776 [2024-10-08 20:45:32.504213] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.776 [2024-10-08 20:45:32.504279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.776 [2024-10-08 20:45:32.504297] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.776 [2024-10-08 20:45:32.504311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.776 [2024-10-08 20:45:32.504324] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.776 [2024-10-08 20:45:32.505081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 [2024-10-08 20:45:32.761842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 [2024-10-08 20:45:32.782799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 NULL1 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.035 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.294 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:04.294 [2024-10-08 20:45:32.849391] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
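Note on the traced rpc_cmd calls above: they stand up the target for this test — a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, allow-any-host, max 10 namespaces), a listener on 10.0.0.2:4420, and a null bdev NULL1 (1000 MB, 512-byte blocks) attached as a namespace, matching the "Namespace ID: 1 size: 1GB" line the initiator prints below. A minimal hedged sketch of driving the same sequence by hand follows; it assumes rpc_cmd resolves to scripts/rpc.py against the nvmf_tgt started earlier (that wrapper mapping is an assumption, not visible in this log), and all flags are copied verbatim from the trace above:
  # sketch only: reproduce the fused_ordering target setup manually
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then run the initiator-side tool the same way the test script does:
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'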
00:18:04.294 [2024-10-08 20:45:32.849487] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688802 ] 00:18:05.229 Attached to nqn.2016-06.io.spdk:cnode1 00:18:05.229 Namespace ID: 1 size: 1GB 00:18:05.229 fused_ordering(0) 00:18:05.229 fused_ordering(1) 00:18:05.229 fused_ordering(2) 00:18:05.229 fused_ordering(3) 00:18:05.229 fused_ordering(4) 00:18:05.229 fused_ordering(5) 00:18:05.229 fused_ordering(6) 00:18:05.229 fused_ordering(7) 00:18:05.229 fused_ordering(8) 00:18:05.229 fused_ordering(9) 00:18:05.229 fused_ordering(10) 00:18:05.229 fused_ordering(11) 00:18:05.229 fused_ordering(12) 00:18:05.229 fused_ordering(13) 00:18:05.229 fused_ordering(14) 00:18:05.229 fused_ordering(15) 00:18:05.229 fused_ordering(16) 00:18:05.229 fused_ordering(17) 00:18:05.229 fused_ordering(18) 00:18:05.229 fused_ordering(19) 00:18:05.229 fused_ordering(20) 00:18:05.229 fused_ordering(21) 00:18:05.229 fused_ordering(22) 00:18:05.229 fused_ordering(23) 00:18:05.229 fused_ordering(24) 00:18:05.229 fused_ordering(25) 00:18:05.229 fused_ordering(26) 00:18:05.229 fused_ordering(27) 00:18:05.229 fused_ordering(28) 00:18:05.229 fused_ordering(29) 00:18:05.229 fused_ordering(30) 00:18:05.229 fused_ordering(31) 00:18:05.229 fused_ordering(32) 00:18:05.229 fused_ordering(33) 00:18:05.229 fused_ordering(34) 00:18:05.229 fused_ordering(35) 00:18:05.229 fused_ordering(36) 00:18:05.229 fused_ordering(37) 00:18:05.229 fused_ordering(38) 00:18:05.229 fused_ordering(39) 00:18:05.229 fused_ordering(40) 00:18:05.229 fused_ordering(41) 00:18:05.229 fused_ordering(42) 00:18:05.229 fused_ordering(43) 00:18:05.229 fused_ordering(44) 00:18:05.229 fused_ordering(45) 00:18:05.229 fused_ordering(46) 00:18:05.229 fused_ordering(47) 00:18:05.229 fused_ordering(48) 00:18:05.229 fused_ordering(49) 00:18:05.229 fused_ordering(50) 00:18:05.229 fused_ordering(51) 00:18:05.229 fused_ordering(52) 00:18:05.229 fused_ordering(53) 00:18:05.229 fused_ordering(54) 00:18:05.229 fused_ordering(55) 00:18:05.229 fused_ordering(56) 00:18:05.229 fused_ordering(57) 00:18:05.229 fused_ordering(58) 00:18:05.229 fused_ordering(59) 00:18:05.229 fused_ordering(60) 00:18:05.229 fused_ordering(61) 00:18:05.229 fused_ordering(62) 00:18:05.229 fused_ordering(63) 00:18:05.229 fused_ordering(64) 00:18:05.229 fused_ordering(65) 00:18:05.229 fused_ordering(66) 00:18:05.229 fused_ordering(67) 00:18:05.229 fused_ordering(68) 00:18:05.229 fused_ordering(69) 00:18:05.229 fused_ordering(70) 00:18:05.229 fused_ordering(71) 00:18:05.229 fused_ordering(72) 00:18:05.229 fused_ordering(73) 00:18:05.229 fused_ordering(74) 00:18:05.229 fused_ordering(75) 00:18:05.229 fused_ordering(76) 00:18:05.229 fused_ordering(77) 00:18:05.229 fused_ordering(78) 00:18:05.229 fused_ordering(79) 00:18:05.229 fused_ordering(80) 00:18:05.229 fused_ordering(81) 00:18:05.229 fused_ordering(82) 00:18:05.229 fused_ordering(83) 00:18:05.229 fused_ordering(84) 00:18:05.229 fused_ordering(85) 00:18:05.229 fused_ordering(86) 00:18:05.229 fused_ordering(87) 00:18:05.229 fused_ordering(88) 00:18:05.229 fused_ordering(89) 00:18:05.229 fused_ordering(90) 00:18:05.229 fused_ordering(91) 00:18:05.229 fused_ordering(92) 00:18:05.229 fused_ordering(93) 00:18:05.229 fused_ordering(94) 00:18:05.229 fused_ordering(95) 00:18:05.229 fused_ordering(96) 00:18:05.229 fused_ordering(97) 00:18:05.229 fused_ordering(98) 
00:18:05.229 fused_ordering(99) 00:18:05.229 fused_ordering(100) [fused_ordering(101) through fused_ordering(957): consecutive entries logged between 00:18:05.229 and 00:18:10.080] 00:18:10.080 fused_ordering(958)
00:18:10.080 fused_ordering(959) 00:18:10.080 fused_ordering(960) 00:18:10.080 fused_ordering(961) 00:18:10.080 fused_ordering(962) 00:18:10.080 fused_ordering(963) 00:18:10.080 fused_ordering(964) 00:18:10.080 fused_ordering(965) 00:18:10.080 fused_ordering(966) 00:18:10.080 fused_ordering(967) 00:18:10.080 fused_ordering(968) 00:18:10.080 fused_ordering(969) 00:18:10.080 fused_ordering(970) 00:18:10.080 fused_ordering(971) 00:18:10.080 fused_ordering(972) 00:18:10.080 fused_ordering(973) 00:18:10.080 fused_ordering(974) 00:18:10.080 fused_ordering(975) 00:18:10.080 fused_ordering(976) 00:18:10.080 fused_ordering(977) 00:18:10.080 fused_ordering(978) 00:18:10.080 fused_ordering(979) 00:18:10.080 fused_ordering(980) 00:18:10.080 fused_ordering(981) 00:18:10.080 fused_ordering(982) 00:18:10.080 fused_ordering(983) 00:18:10.080 fused_ordering(984) 00:18:10.080 fused_ordering(985) 00:18:10.080 fused_ordering(986) 00:18:10.080 fused_ordering(987) 00:18:10.080 fused_ordering(988) 00:18:10.080 fused_ordering(989) 00:18:10.080 fused_ordering(990) 00:18:10.080 fused_ordering(991) 00:18:10.080 fused_ordering(992) 00:18:10.080 fused_ordering(993) 00:18:10.080 fused_ordering(994) 00:18:10.080 fused_ordering(995) 00:18:10.080 fused_ordering(996) 00:18:10.080 fused_ordering(997) 00:18:10.080 fused_ordering(998) 00:18:10.080 fused_ordering(999) 00:18:10.081 fused_ordering(1000) 00:18:10.081 fused_ordering(1001) 00:18:10.081 fused_ordering(1002) 00:18:10.081 fused_ordering(1003) 00:18:10.081 fused_ordering(1004) 00:18:10.081 fused_ordering(1005) 00:18:10.081 fused_ordering(1006) 00:18:10.081 fused_ordering(1007) 00:18:10.081 fused_ordering(1008) 00:18:10.081 fused_ordering(1009) 00:18:10.081 fused_ordering(1010) 00:18:10.081 fused_ordering(1011) 00:18:10.081 fused_ordering(1012) 00:18:10.081 fused_ordering(1013) 00:18:10.081 fused_ordering(1014) 00:18:10.081 fused_ordering(1015) 00:18:10.081 fused_ordering(1016) 00:18:10.081 fused_ordering(1017) 00:18:10.081 fused_ordering(1018) 00:18:10.081 fused_ordering(1019) 00:18:10.081 fused_ordering(1020) 00:18:10.081 fused_ordering(1021) 00:18:10.081 fused_ordering(1022) 00:18:10.081 fused_ordering(1023) 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.081 rmmod nvme_tcp 00:18:10.081 rmmod nvme_fabrics 00:18:10.081 rmmod nvme_keyring 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:10.081 20:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1688771 ']' 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1688771 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1688771 ']' 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1688771 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688771 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688771' 00:18:10.081 killing process with pid 1688771 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1688771 00:18:10.081 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1688771 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.341 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.250 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.250 00:18:12.250 real 0m12.480s 00:18:12.250 user 0m10.514s 00:18:12.250 sys 0m6.170s 00:18:12.250 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.250 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:12.250 ************************************ 00:18:12.250 END TEST nvmf_fused_ordering 00:18:12.250 
************************************ 00:18:12.508 20:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:12.508 20:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:12.508 20:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.508 20:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.508 ************************************ 00:18:12.509 START TEST nvmf_ns_masking 00:18:12.509 ************************************ 00:18:12.509 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:12.509 * Looking for test storage... 00:18:12.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.509 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:12.509 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:12.509 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:12.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.768 --rc genhtml_branch_coverage=1 00:18:12.768 --rc genhtml_function_coverage=1 00:18:12.768 --rc genhtml_legend=1 00:18:12.768 --rc geninfo_all_blocks=1 00:18:12.768 --rc geninfo_unexecuted_blocks=1 00:18:12.768 00:18:12.768 ' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:12.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.768 --rc genhtml_branch_coverage=1 00:18:12.768 --rc genhtml_function_coverage=1 00:18:12.768 --rc genhtml_legend=1 00:18:12.768 --rc geninfo_all_blocks=1 00:18:12.768 --rc geninfo_unexecuted_blocks=1 00:18:12.768 00:18:12.768 ' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:12.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.768 --rc genhtml_branch_coverage=1 00:18:12.768 --rc genhtml_function_coverage=1 00:18:12.768 --rc genhtml_legend=1 00:18:12.768 --rc geninfo_all_blocks=1 00:18:12.768 --rc geninfo_unexecuted_blocks=1 00:18:12.768 00:18:12.768 ' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:12.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.768 --rc genhtml_branch_coverage=1 00:18:12.768 --rc genhtml_function_coverage=1 00:18:12.768 --rc genhtml_legend=1 00:18:12.768 --rc geninfo_all_blocks=1 00:18:12.768 --rc geninfo_unexecuted_blocks=1 00:18:12.768 00:18:12.768 ' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:12.768 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=025c72ea-ce4a-4627-97e2-1c3480c52af5 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fa7b6ca2-6355-4fef-a930-1e4d8c3c9ffa 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0ab0e532-b6c7-403a-8cb4-862c7a75816a 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:12.769 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:16.060 20:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.060 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:16.061 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:16.061 20:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:16.061 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:16.061 Found net devices under 0000:84:00.0: cvl_0_0 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
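The trace above resolves each supported NIC to its kernel interfaces purely through sysfs (the same loop repeats below for the second port, 0000:84:00.1). A minimal standalone sketch of that lookup, assuming bash and using the first PCI address printed in the log; the variable names are illustrative:

# List the kernel net devices that sit under one PCI function, as the trace does.
pci=0000:84:00.0
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
if (( ${#pci_net_devs[@]} > 0 )) && [[ -e ${pci_net_devs[0]} ]]; then
    # Keep only the interface names (e.g. cvl_0_0), dropping the sysfs path.
    pci_net_devs=( "${pci_net_devs[@]##*/}" )
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No net devices under $pci"
fi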
00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:16.061 Found net devices under 0000:84:00.1: cvl_0_1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.061 20:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:16.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:18:16.061 00:18:16.061 --- 10.0.0.2 ping statistics --- 00:18:16.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.061 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:18:16.061 00:18:16.061 --- 10.0.0.1 ping statistics --- 00:18:16.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.061 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:18:16.061 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1691632 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1691632 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1691632 ']' 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.062 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.062 [2024-10-08 20:45:44.707913] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:16.062 [2024-10-08 20:45:44.708084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.321 [2024-10-08 20:45:44.872994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.580 [2024-10-08 20:45:45.097936] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.580 [2024-10-08 20:45:45.098043] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.580 [2024-10-08 20:45:45.098078] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.580 [2024-10-08 20:45:45.098107] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.580 [2024-10-08 20:45:45.098135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
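The block above is the harness doing its target-side network plumbing before anything NVMe-related runs: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, the 10.0.0.0/24 addresses are assigned, TCP/4420 is opened, reachability is verified with ping, and nvmf_tgt is launched inside the namespace. A minimal sketch of that sequence, with interface names, addresses, and paths taken from this run (the real common.sh also flushes old addresses first and tags the iptables rule with an SPDK_NVMF comment so it can be cleaned up later):

# Target-side plumbing distilled from nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
# Start the target inside the namespace; the test then waits on /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &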
00:18:16.580 [2024-10-08 20:45:45.099470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.519 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.089 [2024-10-08 20:45:46.845769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.349 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:18.349 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:18.349 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:18.916 Malloc1 00:18:18.916 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:19.483 Malloc2 00:18:19.483 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.057 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:20.623 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.882 [2024-10-08 20:45:49.526459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.882 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:20.882 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ab0e532-b6c7-403a-8cb4-862c7a75816a -a 10.0.0.2 -s 4420 -i 4 00:18:21.141 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.141 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:21.141 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.141 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:21.141 
20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.060 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.319 [ 0]:0x1 00:18:23.319 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.319 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.319 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5cdf6a2cff874c978b405c244b6a0052 00:18:23.319 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5cdf6a2cff874c978b405c244b6a0052 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.319 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.578 [ 0]:0x1 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5cdf6a2cff874c978b405c244b6a0052 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5cdf6a2cff874c978b405c244b6a0052 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.578 20:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.578 [ 1]:0x2 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.578 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.837 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:23.837 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.837 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:23.837 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.837 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.095 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ab0e532-b6c7-403a-8cb4-862c7a75816a -a 10.0.0.2 -s 4420 -i 4 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:24.662 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:26.563 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:26.563 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:26.563 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.822 [ 0]:0x2 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d031eca2ac234fc78b66ad27348f4955 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.822 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.760 [ 0]:0x1 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5cdf6a2cff874c978b405c244b6a0052 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5cdf6a2cff874c978b405c244b6a0052 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.760 [ 1]:0x2 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.760 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.328 20:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.328 [ 0]:0x2 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.328 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.586 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:28.586 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.586 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:28.586 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.586 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.843 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:28.843 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ab0e532-b6c7-403a-8cb4-862c7a75816a -a 10.0.0.2 -s 4420 -i 4 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:29.103 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:31.007 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:31.007 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:31.007 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.008 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:31.008 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.008 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:31.008 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:31.008 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.265 [ 0]:0x1 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5cdf6a2cff874c978b405c244b6a0052 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5cdf6a2cff874c978b405c244b6a0052 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.265 [ 1]:0x2 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.265 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.265 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:31.265 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.265 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.832 [ 0]:0x2 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.832 20:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:31.832 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:32.400 [2024-10-08 20:46:01.142180] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:32.400 request: 00:18:32.400 { 00:18:32.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.400 "nsid": 2, 00:18:32.400 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.400 "method": "nvmf_ns_remove_host", 00:18:32.400 "req_id": 1 00:18:32.400 } 00:18:32.400 Got JSON-RPC error response 00:18:32.400 response: 00:18:32.400 { 00:18:32.400 "code": -32602, 00:18:32.400 "message": "Invalid parameters" 00:18:32.400 } 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:32.661 20:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:32.661 [ 0]:0x2 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d031eca2ac234fc78b66ad27348f4955 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d031eca2ac234fc78b66ad27348f4955 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:32.661 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1693681 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1693681 /var/tmp/host.sock 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1693681 ']' 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:32.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.921 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.921 [2024-10-08 20:46:01.563085] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:32.921 [2024-10-08 20:46:01.563195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693681 ] 00:18:32.921 [2024-10-08 20:46:01.667730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.180 [2024-10-08 20:46:01.882268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.747 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.747 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:33.747 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.005 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:34.572 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 025c72ea-ce4a-4627-97e2-1c3480c52af5 00:18:34.572 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:34.572 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 025C72EACE4A462797E21C3480C52AF5 -i 00:18:34.832 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fa7b6ca2-6355-4fef-a930-1e4d8c3c9ffa 00:18:34.832 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:34.832 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FA7B6CA263554FEFA9301E4D8C3C9FFA -i 00:18:35.398 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:35.658 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:36.224 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:36.224 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:36.481 nvme0n1 00:18:36.740 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:36.740 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:37.675 nvme1n2 00:18:37.675 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:37.675 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:37.675 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:37.675 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:37.675 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:37.934 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:37.934 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:37.934 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:37.934 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:38.502 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 025c72ea-ce4a-4627-97e2-1c3480c52af5 == \0\2\5\c\7\2\e\a\-\c\e\4\a\-\4\6\2\7\-\9\7\e\2\-\1\c\3\4\8\0\c\5\2\a\f\5 ]] 00:18:38.502 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:38.502 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:38.502 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
fa7b6ca2-6355-4fef-a930-1e4d8c3c9ffa == \f\a\7\b\6\c\a\2\-\6\3\5\5\-\4\f\e\f\-\a\9\3\0\-\1\e\4\d\8\c\3\c\9\f\f\a ]] 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1693681 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1693681 ']' 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1693681 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693681 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693681' 00:18:39.070 killing process with pid 1693681 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1693681 00:18:39.070 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1693681 00:18:40.085 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.698 rmmod nvme_tcp 00:18:40.698 rmmod nvme_fabrics 00:18:40.698 rmmod nvme_keyring 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1691632 ']' 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1691632 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1691632 ']' 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1691632 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1691632 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1691632' 00:18:40.698 killing process with pid 1691632 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1691632 00:18:40.698 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1691632 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.269 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.175 00:18:43.175 real 0m30.758s 00:18:43.175 user 0m44.919s 00:18:43.175 sys 0m6.593s 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:43.175 ************************************ 00:18:43.175 END TEST nvmf_ns_masking 00:18:43.175 ************************************ 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
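Before the next test starts, it is worth condensing what nvmf_ns_masking actually exercised: a namespace attached with --no-auto-visible is invisible to a connected host (nvme list-ns does not report it and nvme id-ns returns an all-zero NGUID) until nvmf_ns_add_host grants that host access, nvmf_ns_remove_host hides it again, while the same remove against the auto-visible namespace 2 fails with -32602 Invalid parameters. A condensed sketch of that flow using the NQNs and addresses from this run (rpc.py stands for the full scripts/rpc.py path, and the extra nvme connect flags such as -I and -i used by the test are dropped here):

# Namespace-masking flow condensed from the ns_masking.sh trace above.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0 | grep 0x1 || echo "nsid 1 masked"     # hidden by default
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid            # real NGUID once visible
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid            # back to all zeroes
nvme disconnect -n nqn.2016-06.io.spdk:cnode1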
00:18:43.175 ************************************ 00:18:43.175 START TEST nvmf_nvme_cli 00:18:43.175 ************************************ 00:18:43.175 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.434 * Looking for test storage... 00:18:43.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.434 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:43.434 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:43.434 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:43.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.434 --rc genhtml_branch_coverage=1 00:18:43.434 --rc genhtml_function_coverage=1 00:18:43.434 --rc genhtml_legend=1 00:18:43.434 --rc geninfo_all_blocks=1 00:18:43.434 --rc geninfo_unexecuted_blocks=1 00:18:43.434 00:18:43.434 ' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:43.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.434 --rc genhtml_branch_coverage=1 00:18:43.434 --rc genhtml_function_coverage=1 00:18:43.434 --rc genhtml_legend=1 00:18:43.434 --rc geninfo_all_blocks=1 00:18:43.434 --rc geninfo_unexecuted_blocks=1 00:18:43.434 00:18:43.434 ' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:43.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.434 --rc genhtml_branch_coverage=1 00:18:43.434 --rc genhtml_function_coverage=1 00:18:43.434 --rc genhtml_legend=1 00:18:43.434 --rc geninfo_all_blocks=1 00:18:43.434 --rc geninfo_unexecuted_blocks=1 00:18:43.434 00:18:43.434 ' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:43.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.434 --rc genhtml_branch_coverage=1 00:18:43.434 --rc genhtml_function_coverage=1 00:18:43.434 --rc genhtml_legend=1 00:18:43.434 --rc geninfo_all_blocks=1 00:18:43.434 --rc geninfo_unexecuted_blocks=1 00:18:43.434 00:18:43.434 ' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
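The trace above steps through the lt/cmp_versions helpers in scripts/common.sh, which compare the detected lcov version against 2 before choosing the coverage flags: both version strings are split on '.', '-' and ':' and compared field by field as integers. A simplified sketch of that comparison logic (version_lt is a stand-in name; the real helper also normalizes non-numeric fields through decimal() and supports the other comparison operators, so treat this as an illustration rather than the library source):

# Return true if dotted version $1 is strictly less than $2 (numeric fields only).
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1   # equal versions are not "less than"
}
# e.g. version_lt 1.15 2 && echo "lcov predates 2.x, use the legacy branch-coverage flags"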
00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.434 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.435 20:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.435 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:46.723 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:46.723 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.723 
20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:46.723 Found net devices under 0000:84:00.0: cvl_0_0 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:46.723 Found net devices under 0000:84:00.1: cvl_0_1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:18:46.723 00:18:46.723 --- 10.0.0.2 ping statistics --- 00:18:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.723 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:18:46.723 00:18:46.723 --- 10.0.0.1 ping statistics --- 00:18:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.723 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.723 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1696602 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1696602 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1696602 ']' 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.724 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 [2024-10-08 20:46:15.318438] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
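Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above moves the target-side port into its own network namespace so initiator and target traffic has to cross the physical e810 link; roughly, with the interface names from this runner (cvl_0_0 as target, cvl_0_1 as initiator):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Target-side commands after this point are prefixed with 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD), which is why the nvmf_tgt below is launched through that wrapper.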
00:18:46.724 [2024-10-08 20:46:15.318550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.724 [2024-10-08 20:46:15.444037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.983 [2024-10-08 20:46:15.677662] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.983 [2024-10-08 20:46:15.677785] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.983 [2024-10-08 20:46:15.677822] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.983 [2024-10-08 20:46:15.677852] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.983 [2024-10-08 20:46:15.677878] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.983 [2024-10-08 20:46:15.681616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.983 [2024-10-08 20:46:15.681768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.983 [2024-10-08 20:46:15.681720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.983 [2024-10-08 20:46:15.681772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 [2024-10-08 20:46:15.865249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 Malloc0 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
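From here the body of nvme_cli.sh, traced below, boils down to a short RPC sequence against the freshly started target plus the matching nvme-cli calls from the host side; condensed (rpc_cmd wraps scripts/rpc.py, and the host NQN/ID come from the nvme gen-hostnqn call earlier in common.sh):

    # target side
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # wait until both namespaces appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The serial SPDKISFASTANDAWESOME is what waitforserial greps for, so the test only proceeds once lsblk reports both namespaces (/dev/nvme0n1 and /dev/nvme0n2) carrying it.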
00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 Malloc1 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 [2024-10-08 20:46:15.948307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.242 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:18:47.501 00:18:47.501 Discovery Log Number of Records 2, Generation counter 2 00:18:47.501 =====Discovery Log Entry 0====== 00:18:47.501 trtype: tcp 00:18:47.501 adrfam: ipv4 00:18:47.501 subtype: current discovery subsystem 00:18:47.501 treq: not required 00:18:47.501 portid: 0 00:18:47.501 trsvcid: 4420 00:18:47.501 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:47.501 traddr: 10.0.0.2 00:18:47.501 eflags: explicit discovery connections, duplicate discovery information 00:18:47.501 sectype: none 00:18:47.501 =====Discovery Log Entry 1====== 00:18:47.501 trtype: tcp 00:18:47.501 adrfam: ipv4 00:18:47.501 subtype: nvme subsystem 00:18:47.501 treq: not required 00:18:47.501 portid: 0 00:18:47.501 trsvcid: 4420 00:18:47.501 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:47.501 traddr: 10.0.0.2 00:18:47.501 eflags: none 00:18:47.501 sectype: none 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:47.501 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:48.069 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:49.971 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:49.971 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:49.971 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:50.229 20:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:50.229 /dev/nvme0n2 ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.229 20:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.229 rmmod nvme_tcp 00:18:50.229 rmmod nvme_fabrics 00:18:50.229 rmmod nvme_keyring 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1696602 ']' 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1696602 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1696602 ']' 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1696602 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.229 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1696602 00:18:50.230 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.230 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.230 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1696602' 00:18:50.230 killing process with pid 1696602 00:18:50.230 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1696602 00:18:50.230 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1696602 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.796 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.338 00:18:53.338 real 0m9.584s 00:18:53.338 user 0m15.468s 00:18:53.338 sys 0m3.239s 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.338 ************************************ 00:18:53.338 END TEST nvmf_nvme_cli 00:18:53.338 ************************************ 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.338 ************************************ 00:18:53.338 START TEST nvmf_vfio_user 00:18:53.338 ************************************ 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:53.338 * Looking for test storage... 00:18:53.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.338 --rc genhtml_branch_coverage=1 00:18:53.338 --rc genhtml_function_coverage=1 00:18:53.338 --rc genhtml_legend=1 00:18:53.338 --rc geninfo_all_blocks=1 00:18:53.338 --rc geninfo_unexecuted_blocks=1 00:18:53.338 00:18:53.338 ' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.338 --rc genhtml_branch_coverage=1 00:18:53.338 --rc genhtml_function_coverage=1 00:18:53.338 --rc genhtml_legend=1 00:18:53.338 --rc geninfo_all_blocks=1 00:18:53.338 --rc geninfo_unexecuted_blocks=1 00:18:53.338 00:18:53.338 ' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.338 --rc genhtml_branch_coverage=1 00:18:53.338 --rc genhtml_function_coverage=1 00:18:53.338 --rc genhtml_legend=1 00:18:53.338 --rc geninfo_all_blocks=1 00:18:53.338 --rc geninfo_unexecuted_blocks=1 00:18:53.338 00:18:53.338 ' 00:18:53.338 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.338 --rc genhtml_branch_coverage=1 00:18:53.338 --rc genhtml_function_coverage=1 00:18:53.338 --rc genhtml_legend=1 00:18:53.338 --rc geninfo_all_blocks=1 00:18:53.338 --rc geninfo_unexecuted_blocks=1 00:18:53.338 00:18:53.338 ' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1697528 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1697528' 00:18:53.339 Process pid: 1697528 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1697528 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1697528 ']' 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.339 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:53.339 [2024-10-08 20:46:21.995337] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:53.339 [2024-10-08 20:46:21.995513] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.598 [2024-10-08 20:46:22.145697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.856 [2024-10-08 20:46:22.364969] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.856 [2024-10-08 20:46:22.365079] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
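The target launched in the trace above is the stock SPDK nvmf_tgt application; the flags it is started with explain the EAL and reactor notices that follow. A condensed sketch of that launch, with SPDK_ROOT standing in for the full Jenkins workspace path shown in the log:

  # Start the NVMe-oF target in the background:
  #   -i 0             shared-memory ID (matches the /dev/shm/nvmf_trace.0 file mentioned below)
  #   -e 0xFFFF        enable all tracepoint groups (hence the "Tracepoint Group Mask 0xFFFF" notice)
  #   -m '[0,1,2,3]'   run one reactor on each of cores 0-3 (the four "Reactor started" lines)
  $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
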
00:18:53.856 [2024-10-08 20:46:22.365115] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.856 [2024-10-08 20:46:22.365146] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.856 [2024-10-08 20:46:22.365174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.856 [2024-10-08 20:46:22.368432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.856 [2024-10-08 20:46:22.368511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.856 [2024-10-08 20:46:22.368593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.856 [2024-10-08 20:46:22.368597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.856 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.856 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:53.856 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:54.792 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:55.360 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:55.360 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:55.360 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:55.360 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:55.360 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:55.618 Malloc1 00:18:55.877 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:56.136 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:56.394 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:56.652 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:56.652 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:56.652 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:57.219 Malloc2 00:18:57.219 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
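Each vfio-user device created by the loop traced above follows the same RPC sequence against the running target. A condensed sketch for the first device, with rpc.py abbreviating the full scripts/rpc.py path from the log (the endpoint directory must exist before the listener is added); the second device repeats the same steps with Malloc2, cnode2 and vfio-user2/2, which is what the remainder of the trace shows:

  # Create the vfio-user transport once, then wire up one controller per device:
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1          # vfio-user endpoint directory
  rpc.py bdev_malloc_create 64 512 -b Malloc1              # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
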
00:18:57.477 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:58.052 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:58.632 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:58.632 [2024-10-08 20:46:27.296570] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:58.632 [2024-10-08 20:46:27.296623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698198 ] 00:18:58.632 [2024-10-08 20:46:27.334980] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:58.632 [2024-10-08 20:46:27.341151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:58.632 [2024-10-08 20:46:27.341184] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4a9b230000 00:18:58.632 [2024-10-08 20:46:27.342144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.343135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.344139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.345143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.346150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.347150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.348157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.349164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.632 [2024-10-08 20:46:27.350169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:58.632 [2024-10-08 20:46:27.350190] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4a9b225000 00:18:58.632 [2024-10-08 20:46:27.351305] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:58.632 [2024-10-08 20:46:27.365254] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:58.632 [2024-10-08 20:46:27.365297] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:58.632 [2024-10-08 20:46:27.374319] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:58.632 [2024-10-08 20:46:27.374372] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:58.632 [2024-10-08 20:46:27.374458] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:58.632 [2024-10-08 20:46:27.374489] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:58.632 [2024-10-08 20:46:27.374499] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:58.632 [2024-10-08 20:46:27.375307] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:58.632 [2024-10-08 20:46:27.375327] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:58.632 [2024-10-08 20:46:27.375339] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:58.632 [2024-10-08 20:46:27.376312] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:58.632 [2024-10-08 20:46:27.376329] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:58.632 [2024-10-08 20:46:27.376343] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.377321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:58.632 [2024-10-08 20:46:27.377339] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.378325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:58.632 [2024-10-08 
20:46:27.378344] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:58.632 [2024-10-08 20:46:27.378353] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.378364] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.378478] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:58.632 [2024-10-08 20:46:27.378486] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.378495] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:58.632 [2024-10-08 20:46:27.379339] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:58.632 [2024-10-08 20:46:27.380334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:58.632 [2024-10-08 20:46:27.381342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:58.632 [2024-10-08 20:46:27.382340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.632 [2024-10-08 20:46:27.382437] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:58.632 [2024-10-08 20:46:27.383354] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:58.632 [2024-10-08 20:46:27.383372] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:58.632 [2024-10-08 20:46:27.383381] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:58.632 [2024-10-08 20:46:27.383404] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:58.632 [2024-10-08 20:46:27.383422] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:58.632 [2024-10-08 20:46:27.383446] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.632 [2024-10-08 20:46:27.383455] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.632 [2024-10-08 20:46:27.383462] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.632 [2024-10-08 20:46:27.383481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.632 [2024-10-08 20:46:27.383548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:58.632 [2024-10-08 20:46:27.383564] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:58.632 [2024-10-08 20:46:27.383572] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:58.632 [2024-10-08 20:46:27.383579] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:58.632 [2024-10-08 20:46:27.383586] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:58.632 [2024-10-08 20:46:27.383594] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:58.632 [2024-10-08 20:46:27.383601] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:58.632 [2024-10-08 20:46:27.383608] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:58.632 [2024-10-08 20:46:27.383624] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:58.632 [2024-10-08 20:46:27.383644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:58.632 [2024-10-08 20:46:27.383688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:58.632 [2024-10-08 20:46:27.383707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.632 [2024-10-08 20:46:27.383720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.632 [2024-10-08 20:46:27.383732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.632 [2024-10-08 20:46:27.383744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.632 [2024-10-08 20:46:27.383752] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383767] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.383795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.383805] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:58.633 [2024-10-08 20:46:27.383813] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383823] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383839] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.383870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.383935] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383950] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.383964] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:58.633 [2024-10-08 20:46:27.383988] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:58.633 [2024-10-08 20:46:27.383994] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384036] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:58.633 [2024-10-08 20:46:27.384051] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384068] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384081] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.633 [2024-10-08 20:46:27.384089] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.633 [2024-10-08 20:46:27.384095] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384158] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384172] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384184] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.633 [2024-10-08 20:46:27.384192] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.633 [2024-10-08 20:46:27.384198] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384236] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384247] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384261] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384271] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384279] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384287] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384295] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:58.633 [2024-10-08 20:46:27.384302] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:58.633 [2024-10-08 20:46:27.384310] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:58.633 [2024-10-08 20:46:27.384334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384457] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:58.633 [2024-10-08 20:46:27.384467] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:58.633 [2024-10-08 20:46:27.384473] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:58.633 [2024-10-08 20:46:27.384479] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:58.633 [2024-10-08 20:46:27.384485] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:58.633 [2024-10-08 20:46:27.384494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:58.633 [2024-10-08 20:46:27.384506] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:58.633 [2024-10-08 20:46:27.384514] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:58.633 [2024-10-08 20:46:27.384520] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384539] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:58.633 [2024-10-08 20:46:27.384547] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.633 [2024-10-08 20:46:27.384553] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384574] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:58.633 [2024-10-08 20:46:27.384582] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:58.633 [2024-10-08 20:46:27.384588] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.633 [2024-10-08 20:46:27.384596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:58.633 [2024-10-08 20:46:27.384608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:58.633 [2024-10-08 20:46:27.384682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:58.633 ===================================================== 00:18:58.633 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:58.633 ===================================================== 00:18:58.633 Controller Capabilities/Features 00:18:58.633 ================================ 00:18:58.633 Vendor ID: 4e58 00:18:58.633 Subsystem Vendor ID: 4e58 00:18:58.633 Serial Number: SPDK1 00:18:58.633 Model Number: SPDK bdev Controller 00:18:58.633 Firmware Version: 25.01 00:18:58.633 Recommended Arb Burst: 6 00:18:58.633 IEEE OUI Identifier: 8d 6b 50 00:18:58.633 Multi-path I/O 00:18:58.633 May have multiple subsystem ports: Yes 00:18:58.633 May have multiple controllers: Yes 00:18:58.633 Associated with SR-IOV VF: No 00:18:58.633 Max Data Transfer Size: 131072 00:18:58.634 Max Number of Namespaces: 32 00:18:58.634 Max Number of I/O Queues: 127 00:18:58.634 NVMe Specification Version (VS): 1.3 00:18:58.634 NVMe Specification Version (Identify): 1.3 00:18:58.634 Maximum Queue Entries: 256 00:18:58.634 Contiguous Queues Required: Yes 00:18:58.634 Arbitration Mechanisms Supported 00:18:58.634 Weighted Round Robin: Not Supported 00:18:58.634 Vendor Specific: Not Supported 00:18:58.634 Reset Timeout: 15000 ms 00:18:58.634 Doorbell Stride: 4 bytes 00:18:58.634 NVM Subsystem Reset: Not Supported 00:18:58.634 Command Sets Supported 00:18:58.634 NVM Command Set: Supported 00:18:58.634 Boot Partition: Not Supported 00:18:58.634 Memory Page Size Minimum: 4096 bytes 00:18:58.634 Memory Page Size Maximum: 4096 bytes 00:18:58.634 Persistent Memory Region: Not Supported 00:18:58.634 Optional Asynchronous Events Supported 00:18:58.634 Namespace Attribute Notices: Supported 00:18:58.634 Firmware Activation Notices: Not Supported 00:18:58.634 ANA Change Notices: Not Supported 00:18:58.634 PLE Aggregate Log Change Notices: Not Supported 00:18:58.634 LBA Status Info Alert Notices: Not Supported 00:18:58.634 EGE Aggregate Log Change Notices: Not Supported 00:18:58.634 Normal NVM Subsystem Shutdown event: Not Supported 00:18:58.634 Zone Descriptor Change Notices: Not Supported 00:18:58.634 Discovery Log Change Notices: Not Supported 00:18:58.634 Controller Attributes 00:18:58.634 128-bit Host Identifier: Supported 00:18:58.634 Non-Operational Permissive Mode: Not Supported 00:18:58.634 NVM Sets: Not Supported 00:18:58.634 Read Recovery Levels: Not Supported 00:18:58.634 Endurance Groups: Not Supported 00:18:58.634 Predictable Latency Mode: Not Supported 00:18:58.634 Traffic Based Keep ALive: Not Supported 00:18:58.634 Namespace Granularity: Not Supported 00:18:58.634 SQ Associations: Not Supported 00:18:58.634 UUID List: Not Supported 00:18:58.634 Multi-Domain Subsystem: Not Supported 00:18:58.634 Fixed Capacity Management: Not Supported 00:18:58.634 Variable Capacity Management: Not Supported 00:18:58.634 Delete Endurance Group: Not Supported 00:18:58.634 Delete NVM Set: Not Supported 00:18:58.634 Extended LBA Formats Supported: Not Supported 00:18:58.634 Flexible Data Placement Supported: Not Supported 00:18:58.634 00:18:58.634 Controller Memory Buffer Support 00:18:58.634 ================================ 00:18:58.634 Supported: No 00:18:58.634 00:18:58.634 Persistent Memory Region Support 00:18:58.634 
================================ 00:18:58.634 Supported: No 00:18:58.634 00:18:58.634 Admin Command Set Attributes 00:18:58.634 ============================ 00:18:58.634 Security Send/Receive: Not Supported 00:18:58.634 Format NVM: Not Supported 00:18:58.634 Firmware Activate/Download: Not Supported 00:18:58.634 Namespace Management: Not Supported 00:18:58.634 Device Self-Test: Not Supported 00:18:58.634 Directives: Not Supported 00:18:58.634 NVMe-MI: Not Supported 00:18:58.634 Virtualization Management: Not Supported 00:18:58.634 Doorbell Buffer Config: Not Supported 00:18:58.634 Get LBA Status Capability: Not Supported 00:18:58.634 Command & Feature Lockdown Capability: Not Supported 00:18:58.634 Abort Command Limit: 4 00:18:58.634 Async Event Request Limit: 4 00:18:58.634 Number of Firmware Slots: N/A 00:18:58.634 Firmware Slot 1 Read-Only: N/A 00:18:58.634 Firmware Activation Without Reset: N/A 00:18:58.634 Multiple Update Detection Support: N/A 00:18:58.634 Firmware Update Granularity: No Information Provided 00:18:58.634 Per-Namespace SMART Log: No 00:18:58.634 Asymmetric Namespace Access Log Page: Not Supported 00:18:58.634 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:58.634 Command Effects Log Page: Supported 00:18:58.634 Get Log Page Extended Data: Supported 00:18:58.634 Telemetry Log Pages: Not Supported 00:18:58.634 Persistent Event Log Pages: Not Supported 00:18:58.634 Supported Log Pages Log Page: May Support 00:18:58.634 Commands Supported & Effects Log Page: Not Supported 00:18:58.634 Feature Identifiers & Effects Log Page:May Support 00:18:58.634 NVMe-MI Commands & Effects Log Page: May Support 00:18:58.634 Data Area 4 for Telemetry Log: Not Supported 00:18:58.634 Error Log Page Entries Supported: 128 00:18:58.634 Keep Alive: Supported 00:18:58.634 Keep Alive Granularity: 10000 ms 00:18:58.634 00:18:58.634 NVM Command Set Attributes 00:18:58.634 ========================== 00:18:58.634 Submission Queue Entry Size 00:18:58.634 Max: 64 00:18:58.634 Min: 64 00:18:58.634 Completion Queue Entry Size 00:18:58.634 Max: 16 00:18:58.634 Min: 16 00:18:58.634 Number of Namespaces: 32 00:18:58.634 Compare Command: Supported 00:18:58.634 Write Uncorrectable Command: Not Supported 00:18:58.634 Dataset Management Command: Supported 00:18:58.634 Write Zeroes Command: Supported 00:18:58.634 Set Features Save Field: Not Supported 00:18:58.634 Reservations: Not Supported 00:18:58.634 Timestamp: Not Supported 00:18:58.634 Copy: Supported 00:18:58.634 Volatile Write Cache: Present 00:18:58.634 Atomic Write Unit (Normal): 1 00:18:58.634 Atomic Write Unit (PFail): 1 00:18:58.634 Atomic Compare & Write Unit: 1 00:18:58.634 Fused Compare & Write: Supported 00:18:58.634 Scatter-Gather List 00:18:58.634 SGL Command Set: Supported (Dword aligned) 00:18:58.634 SGL Keyed: Not Supported 00:18:58.634 SGL Bit Bucket Descriptor: Not Supported 00:18:58.634 SGL Metadata Pointer: Not Supported 00:18:58.634 Oversized SGL: Not Supported 00:18:58.634 SGL Metadata Address: Not Supported 00:18:58.634 SGL Offset: Not Supported 00:18:58.634 Transport SGL Data Block: Not Supported 00:18:58.634 Replay Protected Memory Block: Not Supported 00:18:58.634 00:18:58.634 Firmware Slot Information 00:18:58.634 ========================= 00:18:58.634 Active slot: 1 00:18:58.634 Slot 1 Firmware Revision: 25.01 00:18:58.634 00:18:58.634 00:18:58.634 Commands Supported and Effects 00:18:58.634 ============================== 00:18:58.634 Admin Commands 00:18:58.634 -------------- 00:18:58.634 Get Log Page (02h): Supported 
00:18:58.634 Identify (06h): Supported 00:18:58.634 Abort (08h): Supported 00:18:58.634 Set Features (09h): Supported 00:18:58.634 Get Features (0Ah): Supported 00:18:58.634 Asynchronous Event Request (0Ch): Supported 00:18:58.634 Keep Alive (18h): Supported 00:18:58.634 I/O Commands 00:18:58.634 ------------ 00:18:58.634 Flush (00h): Supported LBA-Change 00:18:58.634 Write (01h): Supported LBA-Change 00:18:58.634 Read (02h): Supported 00:18:58.634 Compare (05h): Supported 00:18:58.634 Write Zeroes (08h): Supported LBA-Change 00:18:58.634 Dataset Management (09h): Supported LBA-Change 00:18:58.634 Copy (19h): Supported LBA-Change 00:18:58.634 00:18:58.634 Error Log 00:18:58.634 ========= 00:18:58.634 00:18:58.634 Arbitration 00:18:58.634 =========== 00:18:58.634 Arbitration Burst: 1 00:18:58.634 00:18:58.634 Power Management 00:18:58.634 ================ 00:18:58.634 Number of Power States: 1 00:18:58.634 Current Power State: Power State #0 00:18:58.634 Power State #0: 00:18:58.634 Max Power: 0.00 W 00:18:58.634 Non-Operational State: Operational 00:18:58.634 Entry Latency: Not Reported 00:18:58.634 Exit Latency: Not Reported 00:18:58.634 Relative Read Throughput: 0 00:18:58.634 Relative Read Latency: 0 00:18:58.634 Relative Write Throughput: 0 00:18:58.634 Relative Write Latency: 0 00:18:58.634 Idle Power: Not Reported 00:18:58.634 Active Power: Not Reported 00:18:58.634 Non-Operational Permissive Mode: Not Supported 00:18:58.634 00:18:58.634 Health Information 00:18:58.634 ================== 00:18:58.634 Critical Warnings: 00:18:58.634 Available Spare Space: OK 00:18:58.634 Temperature: OK 00:18:58.635 Device Reliability: OK 00:18:58.635 Read Only: No 00:18:58.635 Volatile Memory Backup: OK 00:18:58.635 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:58.635 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:58.635 Available Spare: 0% 00:18:58.635 Available Sp[2024-10-08 20:46:27.384803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:58.635 [2024-10-08 20:46:27.384820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:58.635 [2024-10-08 20:46:27.384864] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:58.635 [2024-10-08 20:46:27.384881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.635 [2024-10-08 20:46:27.384892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.635 [2024-10-08 20:46:27.384902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.635 [2024-10-08 20:46:27.384911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.635 [2024-10-08 20:46:27.385362] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:58.635 [2024-10-08 20:46:27.385382] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:58.635 [2024-10-08 20:46:27.386367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:18:58.635 [2024-10-08 20:46:27.386451] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:58.635 [2024-10-08 20:46:27.386466] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:58.635 [2024-10-08 20:46:27.387390] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:58.635 [2024-10-08 20:46:27.387412] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:58.635 [2024-10-08 20:46:27.387470] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:58.635 [2024-10-08 20:46:27.389417] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:58.896 are Threshold: 0% 00:18:58.896 Life Percentage Used: 0% 00:18:58.896 Data Units Read: 0 00:18:58.896 Data Units Written: 0 00:18:58.896 Host Read Commands: 0 00:18:58.896 Host Write Commands: 0 00:18:58.896 Controller Busy Time: 0 minutes 00:18:58.896 Power Cycles: 0 00:18:58.896 Power On Hours: 0 hours 00:18:58.896 Unsafe Shutdowns: 0 00:18:58.896 Unrecoverable Media Errors: 0 00:18:58.896 Lifetime Error Log Entries: 0 00:18:58.896 Warning Temperature Time: 0 minutes 00:18:58.896 Critical Temperature Time: 0 minutes 00:18:58.896 00:18:58.896 Number of Queues 00:18:58.896 ================ 00:18:58.896 Number of I/O Submission Queues: 127 00:18:58.896 Number of I/O Completion Queues: 127 00:18:58.896 00:18:58.896 Active Namespaces 00:18:58.896 ================= 00:18:58.896 Namespace ID:1 00:18:58.896 Error Recovery Timeout: Unlimited 00:18:58.896 Command Set Identifier: NVM (00h) 00:18:58.896 Deallocate: Supported 00:18:58.896 Deallocated/Unwritten Error: Not Supported 00:18:58.896 Deallocated Read Value: Unknown 00:18:58.896 Deallocate in Write Zeroes: Not Supported 00:18:58.896 Deallocated Guard Field: 0xFFFF 00:18:58.896 Flush: Supported 00:18:58.896 Reservation: Supported 00:18:58.896 Namespace Sharing Capabilities: Multiple Controllers 00:18:58.896 Size (in LBAs): 131072 (0GiB) 00:18:58.896 Capacity (in LBAs): 131072 (0GiB) 00:18:58.896 Utilization (in LBAs): 131072 (0GiB) 00:18:58.896 NGUID: 7FE534E809334FBF96710DB60A68D6B7 00:18:58.896 UUID: 7fe534e8-0933-4fbf-9671-0db60a68d6b7 00:18:58.896 Thin Provisioning: Not Supported 00:18:58.896 Per-NS Atomic Units: Yes 00:18:58.896 Atomic Boundary Size (Normal): 0 00:18:58.896 Atomic Boundary Size (PFail): 0 00:18:58.896 Atomic Boundary Offset: 0 00:18:58.896 Maximum Single Source Range Length: 65535 00:18:58.896 Maximum Copy Length: 65535 00:18:58.896 Maximum Source Range Count: 1 00:18:58.896 NGUID/EUI64 Never Reused: No 00:18:58.896 Namespace Write Protected: No 00:18:58.896 Number of LBA Formats: 1 00:18:58.896 Current LBA Format: LBA Format #00 00:18:58.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:58.896 00:18:58.896 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:58.896 [2024-10-08 20:46:27.635821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:04.188 Initializing NVMe Controllers 00:19:04.188 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:04.188 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:04.188 Initialization complete. Launching workers. 00:19:04.188 ======================================================== 00:19:04.188 Latency(us) 00:19:04.188 Device Information : IOPS MiB/s Average min max 00:19:04.188 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30909.66 120.74 4140.52 1236.49 11444.39 00:19:04.188 ======================================================== 00:19:04.188 Total : 30909.66 120.74 4140.52 1236.49 11444.39 00:19:04.188 00:19:04.188 [2024-10-08 20:46:32.655504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:04.188 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:04.188 [2024-10-08 20:46:32.911797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.460 Initializing NVMe Controllers 00:19:09.460 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:09.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:09.460 Initialization complete. Launching workers. 00:19:09.460 ======================================================== 00:19:09.460 Latency(us) 00:19:09.460 Device Information : IOPS MiB/s Average min max 00:19:09.460 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.78 6983.73 11030.22 00:19:09.460 ======================================================== 00:19:09.460 Total : 16051.20 62.70 7982.78 6983.73 11030.22 00:19:09.460 00:19:09.460 [2024-10-08 20:46:37.949739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.460 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:09.719 [2024-10-08 20:46:38.223013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:14.992 [2024-10-08 20:46:43.293021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:14.992 Initializing NVMe Controllers 00:19:14.992 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:14.992 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:14.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:14.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:14.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:14.992 Initialization complete. Launching workers. 
00:19:14.992 Starting thread on core 2 00:19:14.992 Starting thread on core 3 00:19:14.992 Starting thread on core 1 00:19:14.992 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:14.992 [2024-10-08 20:46:43.652105] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:18.280 [2024-10-08 20:46:46.951577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:18.280 Initializing NVMe Controllers 00:19:18.280 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:18.280 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:18.280 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:18.280 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:18.280 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:18.280 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:18.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:18.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:18.280 Initialization complete. Launching workers. 00:19:18.280 Starting thread on core 1 with urgent priority queue 00:19:18.280 Starting thread on core 2 with urgent priority queue 00:19:18.280 Starting thread on core 3 with urgent priority queue 00:19:18.280 Starting thread on core 0 with urgent priority queue 00:19:18.280 SPDK bdev Controller (SPDK1 ) core 0: 2472.67 IO/s 40.44 secs/100000 ios 00:19:18.280 SPDK bdev Controller (SPDK1 ) core 1: 2412.67 IO/s 41.45 secs/100000 ios 00:19:18.280 SPDK bdev Controller (SPDK1 ) core 2: 2489.67 IO/s 40.17 secs/100000 ios 00:19:18.280 SPDK bdev Controller (SPDK1 ) core 3: 2441.33 IO/s 40.96 secs/100000 ios 00:19:18.280 ======================================================== 00:19:18.280 00:19:18.280 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:18.539 [2024-10-08 20:46:47.266195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:18.539 Initializing NVMe Controllers 00:19:18.539 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:18.539 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:18.539 Namespace ID: 1 size: 0GB 00:19:18.539 Initialization complete. 00:19:18.539 INFO: using host memory buffer for IO 00:19:18.539 Hello world! 
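All of the tools exercised in this run (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) address the target through the same -r transport ID string rather than a PCI address. A condensed sketch of the read-workload perf invocation from the trace above, with SPDK_ROOT again abbreviating the workspace path; the -s/-g options tune DPDK memory allocation on the client side:

  # 5-second 4 KiB sequential-read load at queue depth 128, pinned to core 1 (-c 0x2),
  # against the vfio-user endpoint exported for cnode1:
  $SPDK_ROOT/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g
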
00:19:18.539 [2024-10-08 20:46:47.300808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:18.798 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:19.058 [2024-10-08 20:46:47.687090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:19.994 Initializing NVMe Controllers 00:19:19.995 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:19.995 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:19.995 Initialization complete. Launching workers. 00:19:19.995 submit (in ns) avg, min, max = 8398.9, 3523.3, 4022275.6 00:19:19.995 complete (in ns) avg, min, max = 27618.7, 2062.2, 4017190.0 00:19:19.995 00:19:19.995 Submit histogram 00:19:19.995 ================ 00:19:19.995 Range in us Cumulative Count 00:19:19.995 3.508 - 3.532: 0.0237% ( 3) 00:19:19.995 3.532 - 3.556: 0.3556% ( 42) 00:19:19.995 3.556 - 3.579: 1.0669% ( 90) 00:19:19.995 3.579 - 3.603: 4.6705% ( 456) 00:19:19.995 3.603 - 3.627: 9.6096% ( 625) 00:19:19.995 3.627 - 3.650: 19.0058% ( 1189) 00:19:19.995 3.650 - 3.674: 28.1650% ( 1159) 00:19:19.995 3.674 - 3.698: 38.9442% ( 1364) 00:19:19.995 3.698 - 3.721: 47.4079% ( 1071) 00:19:19.995 3.721 - 3.745: 53.2164% ( 735) 00:19:19.995 3.745 - 3.769: 57.3811% ( 527) 00:19:19.995 3.769 - 3.793: 61.1980% ( 483) 00:19:19.995 3.793 - 3.816: 64.3986% ( 405) 00:19:19.995 3.816 - 3.840: 67.6308% ( 409) 00:19:19.995 3.840 - 3.864: 71.0763% ( 436) 00:19:19.995 3.864 - 3.887: 75.0277% ( 500) 00:19:19.995 3.887 - 3.911: 79.2477% ( 534) 00:19:19.995 3.911 - 3.935: 83.0884% ( 486) 00:19:19.995 3.935 - 3.959: 85.7990% ( 343) 00:19:19.995 3.959 - 3.982: 87.5612% ( 223) 00:19:19.995 3.982 - 4.006: 88.9284% ( 173) 00:19:19.995 4.006 - 4.030: 90.1612% ( 156) 00:19:19.995 4.030 - 4.053: 91.1253% ( 122) 00:19:19.995 4.053 - 4.077: 91.8366% ( 90) 00:19:19.995 4.077 - 4.101: 92.7059% ( 110) 00:19:19.995 4.101 - 4.124: 93.4408% ( 93) 00:19:19.995 4.124 - 4.148: 93.9466% ( 64) 00:19:19.995 4.148 - 4.172: 94.3812% ( 55) 00:19:19.995 4.172 - 4.196: 94.8475% ( 59) 00:19:19.995 4.196 - 4.219: 95.2505% ( 51) 00:19:19.995 4.219 - 4.243: 95.5982% ( 44) 00:19:19.995 4.243 - 4.267: 95.8353% ( 30) 00:19:19.995 4.267 - 4.290: 96.0408% ( 26) 00:19:19.995 4.290 - 4.314: 96.2858% ( 31) 00:19:19.995 4.314 - 4.338: 96.4754% ( 24) 00:19:19.995 4.338 - 4.361: 96.6256% ( 19) 00:19:19.995 4.361 - 4.385: 96.7283% ( 13) 00:19:19.995 4.385 - 4.409: 96.7836% ( 7) 00:19:19.995 4.409 - 4.433: 96.8864% ( 13) 00:19:19.995 4.433 - 4.456: 96.9812% ( 12) 00:19:19.995 4.456 - 4.480: 97.0286% ( 6) 00:19:19.995 4.480 - 4.504: 97.0918% ( 8) 00:19:19.995 4.504 - 4.527: 97.1234% ( 4) 00:19:19.995 4.527 - 4.551: 97.1630% ( 5) 00:19:19.995 4.551 - 4.575: 97.1867% ( 3) 00:19:19.995 4.575 - 4.599: 97.2104% ( 3) 00:19:19.995 4.599 - 4.622: 97.2341% ( 3) 00:19:19.995 4.622 - 4.646: 97.2420% ( 1) 00:19:19.995 4.646 - 4.670: 97.2578% ( 2) 00:19:19.995 4.693 - 4.717: 97.2657% ( 1) 00:19:19.995 4.717 - 4.741: 97.2736% ( 1) 00:19:19.995 4.741 - 4.764: 97.2894% ( 2) 00:19:19.995 4.764 - 4.788: 97.3368% ( 6) 00:19:19.995 4.788 - 4.812: 97.3526% ( 2) 00:19:19.995 4.812 - 4.836: 97.3842% ( 4) 00:19:19.995 4.836 - 4.859: 97.4237% ( 5) 00:19:19.995 4.859 - 4.883: 97.4949% ( 9) 00:19:19.995 4.883 - 
4.907: 97.5344% ( 5) 00:19:19.995 4.907 - 4.930: 97.5502% ( 2) 00:19:19.995 4.930 - 4.954: 97.6055% ( 7) 00:19:19.995 4.954 - 4.978: 97.6450% ( 5) 00:19:19.995 4.978 - 5.001: 97.7003% ( 7) 00:19:19.995 5.001 - 5.025: 97.7715% ( 9) 00:19:19.995 5.025 - 5.049: 97.8268% ( 7) 00:19:19.995 5.049 - 5.073: 97.8505% ( 3) 00:19:19.995 5.073 - 5.096: 97.8584% ( 1) 00:19:19.995 5.096 - 5.120: 97.8821% ( 3) 00:19:19.995 5.120 - 5.144: 97.9058% ( 3) 00:19:19.995 5.144 - 5.167: 97.9216% ( 2) 00:19:19.995 5.167 - 5.191: 97.9927% ( 9) 00:19:19.995 5.191 - 5.215: 98.0401% ( 6) 00:19:19.995 5.215 - 5.239: 98.0718% ( 4) 00:19:19.995 5.239 - 5.262: 98.1034% ( 4) 00:19:19.995 5.262 - 5.286: 98.1271% ( 3) 00:19:19.995 5.310 - 5.333: 98.1429% ( 2) 00:19:19.995 5.333 - 5.357: 98.1508% ( 1) 00:19:19.995 5.381 - 5.404: 98.1666% ( 2) 00:19:19.995 5.404 - 5.428: 98.1903% ( 3) 00:19:19.995 5.428 - 5.452: 98.2061% ( 2) 00:19:19.995 5.452 - 5.476: 98.2219% ( 2) 00:19:19.995 5.476 - 5.499: 98.2298% ( 1) 00:19:19.995 5.523 - 5.547: 98.2456% ( 2) 00:19:19.995 5.547 - 5.570: 98.2535% ( 1) 00:19:19.995 5.618 - 5.641: 98.2614% ( 1) 00:19:19.995 5.760 - 5.784: 98.2693% ( 1) 00:19:19.995 5.997 - 6.021: 98.2772% ( 1) 00:19:19.995 6.068 - 6.116: 98.2851% ( 1) 00:19:19.995 6.163 - 6.210: 98.2930% ( 1) 00:19:19.995 6.210 - 6.258: 98.3009% ( 1) 00:19:19.995 6.305 - 6.353: 98.3088% ( 1) 00:19:19.995 6.353 - 6.400: 98.3325% ( 3) 00:19:19.995 6.447 - 6.495: 98.3404% ( 1) 00:19:19.995 6.495 - 6.542: 98.3483% ( 1) 00:19:19.995 6.590 - 6.637: 98.3563% ( 1) 00:19:19.995 6.779 - 6.827: 98.3642% ( 1) 00:19:19.995 6.827 - 6.874: 98.3721% ( 1) 00:19:19.995 7.111 - 7.159: 98.3800% ( 1) 00:19:19.995 7.206 - 7.253: 98.3879% ( 1) 00:19:19.995 7.253 - 7.301: 98.3958% ( 1) 00:19:19.995 7.348 - 7.396: 98.4116% ( 2) 00:19:19.995 7.490 - 7.538: 98.4195% ( 1) 00:19:19.995 7.633 - 7.680: 98.4274% ( 1) 00:19:19.995 7.775 - 7.822: 98.4432% ( 2) 00:19:19.995 7.822 - 7.870: 98.4511% ( 1) 00:19:19.995 8.012 - 8.059: 98.4590% ( 1) 00:19:19.995 8.344 - 8.391: 98.4669% ( 1) 00:19:19.995 8.439 - 8.486: 98.4748% ( 1) 00:19:19.995 8.533 - 8.581: 98.4827% ( 1) 00:19:19.995 8.581 - 8.628: 98.4906% ( 1) 00:19:19.995 8.676 - 8.723: 98.5143% ( 3) 00:19:19.995 8.770 - 8.818: 98.5222% ( 1) 00:19:19.995 8.865 - 8.913: 98.5301% ( 1) 00:19:19.995 8.913 - 8.960: 98.5380% ( 1) 00:19:19.995 8.960 - 9.007: 98.5538% ( 2) 00:19:19.995 9.007 - 9.055: 98.5617% ( 1) 00:19:19.995 9.102 - 9.150: 98.5696% ( 1) 00:19:19.995 9.150 - 9.197: 98.5854% ( 2) 00:19:19.995 9.197 - 9.244: 98.5933% ( 1) 00:19:19.995 9.292 - 9.339: 98.6091% ( 2) 00:19:19.995 9.387 - 9.434: 98.6249% ( 2) 00:19:19.995 9.481 - 9.529: 98.6407% ( 2) 00:19:19.995 9.529 - 9.576: 98.6486% ( 1) 00:19:19.995 9.624 - 9.671: 98.6566% ( 1) 00:19:19.995 9.719 - 9.766: 98.6645% ( 1) 00:19:19.995 9.813 - 9.861: 98.6724% ( 1) 00:19:19.995 9.861 - 9.908: 98.6803% ( 1) 00:19:19.995 9.956 - 10.003: 98.6882% ( 1) 00:19:19.995 10.003 - 10.050: 98.6961% ( 1) 00:19:19.995 10.050 - 10.098: 98.7040% ( 1) 00:19:19.995 10.145 - 10.193: 98.7119% ( 1) 00:19:19.995 10.193 - 10.240: 98.7198% ( 1) 00:19:19.995 10.430 - 10.477: 98.7277% ( 1) 00:19:19.995 10.477 - 10.524: 98.7356% ( 1) 00:19:19.995 10.809 - 10.856: 98.7435% ( 1) 00:19:19.995 10.904 - 10.951: 98.7514% ( 1) 00:19:19.995 11.046 - 11.093: 98.7593% ( 1) 00:19:19.995 11.236 - 11.283: 98.7672% ( 1) 00:19:19.995 11.330 - 11.378: 98.7751% ( 1) 00:19:19.995 11.425 - 11.473: 98.7830% ( 1) 00:19:19.995 11.710 - 11.757: 98.7909% ( 1) 00:19:19.995 11.804 - 11.852: 98.7988% ( 1) 00:19:19.995 
11.852 - 11.899: 98.8067% ( 1) 00:19:19.995 11.899 - 11.947: 98.8146% ( 1) 00:19:19.995 11.947 - 11.994: 98.8225% ( 1) 00:19:19.995 12.136 - 12.231: 98.8383% ( 2) 00:19:19.995 12.516 - 12.610: 98.8541% ( 2) 00:19:19.995 12.705 - 12.800: 98.8699% ( 2) 00:19:19.995 12.990 - 13.084: 98.8778% ( 1) 00:19:19.995 13.179 - 13.274: 98.8857% ( 1) 00:19:19.995 13.464 - 13.559: 98.9015% ( 2) 00:19:19.995 13.748 - 13.843: 98.9173% ( 2) 00:19:19.995 13.938 - 14.033: 98.9331% ( 2) 00:19:19.995 14.033 - 14.127: 98.9410% ( 1) 00:19:19.995 14.601 - 14.696: 98.9489% ( 1) 00:19:19.995 14.791 - 14.886: 98.9569% ( 1) 00:19:19.995 15.076 - 15.170: 98.9648% ( 1) 00:19:19.995 15.360 - 15.455: 98.9727% ( 1) 00:19:19.995 17.067 - 17.161: 98.9806% ( 1) 00:19:19.995 17.161 - 17.256: 98.9964% ( 2) 00:19:19.995 17.256 - 17.351: 99.0043% ( 1) 00:19:19.995 17.351 - 17.446: 99.0280% ( 3) 00:19:19.995 17.446 - 17.541: 99.0596% ( 4) 00:19:19.995 17.636 - 17.730: 99.0991% ( 5) 00:19:19.995 17.730 - 17.825: 99.1307% ( 4) 00:19:19.995 17.825 - 17.920: 99.1781% ( 6) 00:19:19.995 17.920 - 18.015: 99.2334% ( 7) 00:19:19.995 18.015 - 18.110: 99.2730% ( 5) 00:19:19.995 18.110 - 18.204: 99.3520% ( 10) 00:19:19.995 18.204 - 18.299: 99.4073% ( 7) 00:19:19.995 18.299 - 18.394: 99.4942% ( 11) 00:19:19.995 18.394 - 18.489: 99.5495% ( 7) 00:19:19.995 18.489 - 18.584: 99.6286% ( 10) 00:19:19.995 18.584 - 18.679: 99.6839% ( 7) 00:19:19.995 18.679 - 18.773: 99.7313% ( 6) 00:19:19.995 18.773 - 18.868: 99.7550% ( 3) 00:19:19.995 18.868 - 18.963: 99.7787% ( 3) 00:19:19.995 19.058 - 19.153: 99.7866% ( 1) 00:19:19.995 19.153 - 19.247: 99.7945% ( 1) 00:19:19.995 19.247 - 19.342: 99.8024% ( 1) 00:19:19.995 19.342 - 19.437: 99.8182% ( 2) 00:19:19.995 19.437 - 19.532: 99.8340% ( 2) 00:19:19.995 19.911 - 20.006: 99.8419% ( 1) 00:19:19.995 20.006 - 20.101: 99.8498% ( 1) 00:19:19.995 22.756 - 22.850: 99.8578% ( 1) 00:19:19.995 23.230 - 23.324: 99.8657% ( 1) 00:19:19.995 23.514 - 23.609: 99.8736% ( 1) 00:19:19.995 25.031 - 25.221: 99.8815% ( 1) 00:19:19.995 29.772 - 29.961: 99.8894% ( 1) 00:19:19.995 3980.705 - 4004.978: 99.9684% ( 10) 00:19:19.996 4004.978 - 4029.250: 100.0000% ( 4) 00:19:19.996 00:19:19.996 Complete histogram 00:19:19.996 ================== 00:19:19.996 Range in us Cumulative Count 00:19:19.996 2.062 - 2.074: 7.3178% ( 926) 00:19:19.996 2.074 - 2.086: 30.9704% ( 2993) 00:19:19.996 2.086 - 2.098: 33.3254% ( 298) 00:19:19.996 2.098 - 2.110: 47.8821% ( 1842) 00:19:19.996 2.110 - 2.121: 60.7634% ( 1630) 00:19:19.996 2.121 - 2.133: 62.4388% ( 212) 00:19:19.996 2.133 - 2.145: 68.3341% ( 746) 00:19:19.996 2.145 - 2.157: 73.2891% ( 627) 00:19:19.996 2.157 - 2.169: 74.2848% ( 126) 00:19:19.996 2.169 - 2.181: 79.2161% ( 624) 00:19:19.996 2.181 - 2.193: 82.3850% ( 401) 00:19:19.996 2.193 - 2.204: 82.9777% ( 75) 00:19:19.996 2.204 - 2.216: 84.7558% ( 225) 00:19:19.996 2.216 - 2.228: 87.1266% ( 300) 00:19:19.996 2.228 - 2.240: 89.1497% ( 256) 00:19:19.996 2.240 - 2.252: 91.4572% ( 292) 00:19:19.996 2.252 - 2.264: 92.9904% ( 194) 00:19:19.996 2.264 - 2.276: 93.3697% ( 48) 00:19:19.996 2.276 - 2.287: 93.7569% ( 49) 00:19:19.996 2.287 - 2.299: 94.1283% ( 47) 00:19:19.996 2.299 - 2.311: 94.6025% ( 60) 00:19:19.996 2.311 - 2.323: 94.9186% ( 40) 00:19:19.996 2.323 - 2.335: 94.9739% ( 7) 00:19:19.996 2.335 - 2.347: 95.0925% ( 15) 00:19:19.996 2.347 - 2.359: 95.1241% ( 4) 00:19:19.996 2.359 - 2.370: 95.1715% ( 6) 00:19:19.996 2.370 - 2.382: 95.3295% ( 20) 00:19:19.996 2.382 - 2.394: 95.5824% ( 32) 00:19:19.996 2.394 - 2.406: 95.8195% ( 30) 00:19:19.996 
2.406 - 2.418: 95.9697% ( 19) 00:19:19.996 2.418 - 2.430: 96.1593% ( 24) 00:19:19.996 2.430 - 2.441: 96.3411% ( 23) 00:19:19.996 2.441 - 2.453: 96.5465% ( 26) 00:19:19.996 2.453 - 2.465: 96.7204% ( 22) 00:19:19.996 2.465 - 2.477: 96.9417% ( 28) 00:19:19.996 2.477 - 2.489: 97.1155% ( 22) 00:19:19.996 2.489 - 2.501: 97.3368% ( 28) 00:19:19.996 2.501 - 2.513: 97.5186% ( 23) 00:19:19.996 2.513 - 2.524: 97.6292% ( 14) 00:19:19.996 2.524 - 2.536: 97.7319% ( 13) 00:19:19.996 2.536 - 2.548: 97.8110% ( 10) 00:19:19.996 2.548 - 2.560: 97.9216% ( 14) 00:19:19.996 2.560 - 2.572: 97.9769% ( 7) 00:19:19.996 2.572 - 2.584: 98.0480% ( 9) 00:19:19.996 2.584 - 2.596: 98.0718% ( 3) 00:19:19.996 2.596 - 2.607: 98.1113% ( 5) 00:19:19.996 2.607 - 2.619: 98.1824% ( 9) 00:19:19.996 2.619 - 2.631: 98.2219% ( 5) 00:19:19.996 2.631 - 2.643: 98.2456% ( 3) 00:19:19.996 2.643 - 2.655: 98.2772% ( 4) 00:19:19.996 2.655 - 2.667: 98.2930% ( 2) 00:19:19.996 2.690 - 2.702: 98.3088% ( 2) 00:19:19.996 2.726 - 2.738: 98.3167% ( 1) 00:19:19.996 2.738 - 2.750: 98.3325% ( 2) 00:19:19.996 2.773 - 2.785: 98.3404% ( 1) 00:19:19.996 2.785 - 2.797: 98.3483% ( 1) 00:19:19.996 2.797 - 2.809: 98.3563% ( 1) 00:19:19.996 2.809 - 2.821: 98.3642% ( 1) 00:19:19.996 2.844 - 2.856: 98.3800% ( 2) 00:19:19.996 3.271 - 3.295: 98.3879% ( 1) 00:19:19.996 3.319 - 3.342: 98.3958% ( 1) 00:19:19.996 3.484 - 3.508: 9[2024-10-08 20:46:48.707223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:19.996 8.4116% ( 2) 00:19:19.996 3.508 - 3.532: 98.4274% ( 2) 00:19:19.996 3.532 - 3.556: 98.4353% ( 1) 00:19:19.996 3.556 - 3.579: 98.4511% ( 2) 00:19:19.996 3.579 - 3.603: 98.4590% ( 1) 00:19:19.996 3.603 - 3.627: 98.4669% ( 1) 00:19:19.996 3.627 - 3.650: 98.4748% ( 1) 00:19:19.996 3.674 - 3.698: 98.4827% ( 1) 00:19:19.996 3.698 - 3.721: 98.4906% ( 1) 00:19:19.996 3.721 - 3.745: 98.5143% ( 3) 00:19:19.996 3.769 - 3.793: 98.5222% ( 1) 00:19:19.996 3.793 - 3.816: 98.5301% ( 1) 00:19:19.996 3.840 - 3.864: 98.5380% ( 1) 00:19:19.996 3.864 - 3.887: 98.5459% ( 1) 00:19:19.996 3.887 - 3.911: 98.5617% ( 2) 00:19:19.996 3.911 - 3.935: 98.5696% ( 1) 00:19:19.996 3.935 - 3.959: 98.5775% ( 1) 00:19:19.996 3.982 - 4.006: 98.5854% ( 1) 00:19:19.996 4.077 - 4.101: 98.6091% ( 3) 00:19:19.996 4.456 - 4.480: 98.6170% ( 1) 00:19:19.996 5.570 - 5.594: 98.6249% ( 1) 00:19:19.996 5.879 - 5.902: 98.6328% ( 1) 00:19:19.996 6.044 - 6.068: 98.6407% ( 1) 00:19:19.996 6.163 - 6.210: 98.6486% ( 1) 00:19:19.996 6.400 - 6.447: 98.6566% ( 1) 00:19:19.996 6.542 - 6.590: 98.6645% ( 1) 00:19:19.996 6.637 - 6.684: 98.6724% ( 1) 00:19:19.996 7.016 - 7.064: 98.6803% ( 1) 00:19:19.996 7.396 - 7.443: 98.6882% ( 1) 00:19:19.996 7.585 - 7.633: 98.7040% ( 2) 00:19:19.996 8.107 - 8.154: 98.7119% ( 1) 00:19:19.996 8.201 - 8.249: 98.7198% ( 1) 00:19:19.996 8.486 - 8.533: 98.7277% ( 1) 00:19:19.996 8.533 - 8.581: 98.7356% ( 1) 00:19:19.996 10.714 - 10.761: 98.7435% ( 1) 00:19:19.996 11.330 - 11.378: 98.7514% ( 1) 00:19:19.996 14.127 - 14.222: 98.7593% ( 1) 00:19:19.996 15.360 - 15.455: 98.7672% ( 1) 00:19:19.996 15.455 - 15.550: 98.7751% ( 1) 00:19:19.996 15.644 - 15.739: 98.8067% ( 4) 00:19:19.996 15.739 - 15.834: 98.8541% ( 6) 00:19:19.996 15.834 - 15.929: 98.8699% ( 2) 00:19:19.996 15.929 - 16.024: 98.8936% ( 3) 00:19:19.996 16.024 - 16.119: 98.9252% ( 4) 00:19:19.996 16.119 - 16.213: 98.9648% ( 5) 00:19:19.996 16.213 - 16.308: 99.0201% ( 7) 00:19:19.996 16.308 - 16.403: 99.0517% ( 4) 00:19:19.996 16.403 - 16.498: 99.0754% ( 3) 00:19:19.996 16.498 - 
16.593: 99.1070% ( 4) 00:19:19.996 16.593 - 16.687: 99.1623% ( 7) 00:19:19.996 16.687 - 16.782: 99.2018% ( 5) 00:19:19.996 16.782 - 16.877: 99.2334% ( 4) 00:19:19.996 16.877 - 16.972: 99.2572% ( 3) 00:19:19.996 16.972 - 17.067: 99.2730% ( 2) 00:19:19.996 17.067 - 17.161: 99.2888% ( 2) 00:19:19.996 17.256 - 17.351: 99.2967% ( 1) 00:19:19.996 17.351 - 17.446: 99.3204% ( 3) 00:19:19.996 17.446 - 17.541: 99.3283% ( 1) 00:19:19.996 18.868 - 18.963: 99.3362% ( 1) 00:19:19.996 21.049 - 21.144: 99.3441% ( 1) 00:19:19.996 21.997 - 22.092: 99.3520% ( 1) 00:19:19.996 26.738 - 26.927: 99.3599% ( 1) 00:19:19.996 1043.721 - 1049.790: 99.3678% ( 1) 00:19:19.996 3956.433 - 3980.705: 99.3757% ( 1) 00:19:19.996 3980.705 - 4004.978: 99.8340% ( 58) 00:19:19.996 4004.978 - 4029.250: 100.0000% ( 21) 00:19:19.996 00:19:20.254 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:20.254 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:20.254 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:20.254 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:20.254 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:20.513 [ 00:19:20.513 { 00:19:20.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:20.514 "subtype": "Discovery", 00:19:20.514 "listen_addresses": [], 00:19:20.514 "allow_any_host": true, 00:19:20.514 "hosts": [] 00:19:20.514 }, 00:19:20.514 { 00:19:20.514 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:20.514 "subtype": "NVMe", 00:19:20.514 "listen_addresses": [ 00:19:20.514 { 00:19:20.514 "trtype": "VFIOUSER", 00:19:20.514 "adrfam": "IPv4", 00:19:20.514 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:20.514 "trsvcid": "0" 00:19:20.514 } 00:19:20.514 ], 00:19:20.514 "allow_any_host": true, 00:19:20.514 "hosts": [], 00:19:20.514 "serial_number": "SPDK1", 00:19:20.514 "model_number": "SPDK bdev Controller", 00:19:20.514 "max_namespaces": 32, 00:19:20.514 "min_cntlid": 1, 00:19:20.514 "max_cntlid": 65519, 00:19:20.514 "namespaces": [ 00:19:20.514 { 00:19:20.514 "nsid": 1, 00:19:20.514 "bdev_name": "Malloc1", 00:19:20.514 "name": "Malloc1", 00:19:20.514 "nguid": "7FE534E809334FBF96710DB60A68D6B7", 00:19:20.514 "uuid": "7fe534e8-0933-4fbf-9671-0db60a68d6b7" 00:19:20.514 } 00:19:20.514 ] 00:19:20.514 }, 00:19:20.514 { 00:19:20.514 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:20.514 "subtype": "NVMe", 00:19:20.514 "listen_addresses": [ 00:19:20.514 { 00:19:20.514 "trtype": "VFIOUSER", 00:19:20.514 "adrfam": "IPv4", 00:19:20.514 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:20.514 "trsvcid": "0" 00:19:20.514 } 00:19:20.514 ], 00:19:20.514 "allow_any_host": true, 00:19:20.514 "hosts": [], 00:19:20.514 "serial_number": "SPDK2", 00:19:20.514 "model_number": "SPDK bdev Controller", 00:19:20.514 "max_namespaces": 32, 00:19:20.514 "min_cntlid": 1, 00:19:20.514 "max_cntlid": 65519, 00:19:20.514 "namespaces": [ 00:19:20.514 { 00:19:20.514 "nsid": 1, 00:19:20.514 "bdev_name": "Malloc2", 00:19:20.514 "name": "Malloc2", 00:19:20.514 "nguid": "23A686104F4D4E3B877B207AF380A87A", 00:19:20.514 "uuid": 
"23a68610-4f4d-4e3b-877b-207af380a87a" 00:19:20.514 } 00:19:20.514 ] 00:19:20.514 } 00:19:20.514 ] 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1700575 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:20.514 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:20.514 [2024-10-08 20:46:49.246731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:21.081 Malloc3 00:19:21.081 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:21.340 [2024-10-08 20:46:49.974083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:21.340 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:21.340 Asynchronous Event Request test 00:19:21.340 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:21.340 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:21.340 Registering asynchronous event callbacks... 00:19:21.340 Starting namespace attribute notice tests for all controllers... 00:19:21.340 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:21.340 aer_cb - Changed Namespace 00:19:21.340 Cleaning up... 
00:19:21.598 [ 00:19:21.598 { 00:19:21.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:21.598 "subtype": "Discovery", 00:19:21.598 "listen_addresses": [], 00:19:21.598 "allow_any_host": true, 00:19:21.598 "hosts": [] 00:19:21.598 }, 00:19:21.598 { 00:19:21.598 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:21.598 "subtype": "NVMe", 00:19:21.598 "listen_addresses": [ 00:19:21.598 { 00:19:21.598 "trtype": "VFIOUSER", 00:19:21.598 "adrfam": "IPv4", 00:19:21.598 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:21.598 "trsvcid": "0" 00:19:21.598 } 00:19:21.598 ], 00:19:21.598 "allow_any_host": true, 00:19:21.598 "hosts": [], 00:19:21.598 "serial_number": "SPDK1", 00:19:21.598 "model_number": "SPDK bdev Controller", 00:19:21.598 "max_namespaces": 32, 00:19:21.598 "min_cntlid": 1, 00:19:21.598 "max_cntlid": 65519, 00:19:21.598 "namespaces": [ 00:19:21.598 { 00:19:21.598 "nsid": 1, 00:19:21.598 "bdev_name": "Malloc1", 00:19:21.598 "name": "Malloc1", 00:19:21.598 "nguid": "7FE534E809334FBF96710DB60A68D6B7", 00:19:21.598 "uuid": "7fe534e8-0933-4fbf-9671-0db60a68d6b7" 00:19:21.598 }, 00:19:21.598 { 00:19:21.598 "nsid": 2, 00:19:21.598 "bdev_name": "Malloc3", 00:19:21.598 "name": "Malloc3", 00:19:21.598 "nguid": "45D6924D39E74437B2649B4DC29F34D6", 00:19:21.598 "uuid": "45d6924d-39e7-4437-b264-9b4dc29f34d6" 00:19:21.598 } 00:19:21.598 ] 00:19:21.598 }, 00:19:21.598 { 00:19:21.598 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:21.598 "subtype": "NVMe", 00:19:21.598 "listen_addresses": [ 00:19:21.598 { 00:19:21.598 "trtype": "VFIOUSER", 00:19:21.598 "adrfam": "IPv4", 00:19:21.598 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:21.598 "trsvcid": "0" 00:19:21.598 } 00:19:21.598 ], 00:19:21.598 "allow_any_host": true, 00:19:21.598 "hosts": [], 00:19:21.598 "serial_number": "SPDK2", 00:19:21.598 "model_number": "SPDK bdev Controller", 00:19:21.598 "max_namespaces": 32, 00:19:21.598 "min_cntlid": 1, 00:19:21.598 "max_cntlid": 65519, 00:19:21.598 "namespaces": [ 00:19:21.598 { 00:19:21.598 "nsid": 1, 00:19:21.598 "bdev_name": "Malloc2", 00:19:21.598 "name": "Malloc2", 00:19:21.598 "nguid": "23A686104F4D4E3B877B207AF380A87A", 00:19:21.598 "uuid": "23a68610-4f4d-4e3b-877b-207af380a87a" 00:19:21.598 } 00:19:21.598 ] 00:19:21.598 } 00:19:21.598 ] 00:19:21.598 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1700575 00:19:21.598 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:21.598 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:21.598 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:21.598 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:21.598 [2024-10-08 20:46:50.345233] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:19:21.598 [2024-10-08 20:46:50.345279] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700721 ] 00:19:21.860 [2024-10-08 20:46:50.378991] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:21.860 [2024-10-08 20:46:50.387007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:21.860 [2024-10-08 20:46:50.387040] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8dac1ad000 00:19:21.860 [2024-10-08 20:46:50.388017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.389004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.390008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.391013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.392034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.393042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.394042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.395065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.860 [2024-10-08 20:46:50.396060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:21.860 [2024-10-08 20:46:50.396083] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8dac1a2000 00:19:21.860 [2024-10-08 20:46:50.397200] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:21.860 [2024-10-08 20:46:50.416076] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:21.860 [2024-10-08 20:46:50.416125] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:21.860 [2024-10-08 20:46:50.418220] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:21.860 [2024-10-08 20:46:50.418277] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:21.860 [2024-10-08 20:46:50.418371] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:21.860 [2024-10-08 
20:46:50.418405] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:21.860 [2024-10-08 20:46:50.418415] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:21.860 [2024-10-08 20:46:50.419218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:21.860 [2024-10-08 20:46:50.419240] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:21.860 [2024-10-08 20:46:50.419253] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:21.860 [2024-10-08 20:46:50.420224] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:21.860 [2024-10-08 20:46:50.420244] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:21.860 [2024-10-08 20:46:50.420258] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.421229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:21.860 [2024-10-08 20:46:50.421250] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.422235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:21.860 [2024-10-08 20:46:50.422255] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:21.860 [2024-10-08 20:46:50.422264] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.422275] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.422385] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:21.860 [2024-10-08 20:46:50.422393] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.422401] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:21.860 [2024-10-08 20:46:50.423243] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:21.860 [2024-10-08 20:46:50.424243] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:21.860 [2024-10-08 20:46:50.425254] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:19:21.860 [2024-10-08 20:46:50.426251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:21.860 [2024-10-08 20:46:50.426328] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:21.860 [2024-10-08 20:46:50.427267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:21.860 [2024-10-08 20:46:50.427287] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:21.860 [2024-10-08 20:46:50.427296] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:21.860 [2024-10-08 20:46:50.427319] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:21.860 [2024-10-08 20:46:50.427334] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:21.860 [2024-10-08 20:46:50.427357] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.860 [2024-10-08 20:46:50.427366] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.860 [2024-10-08 20:46:50.427373] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.427392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.433681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.433704] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:21.861 [2024-10-08 20:46:50.433714] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:21.861 [2024-10-08 20:46:50.433721] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:21.861 [2024-10-08 20:46:50.433729] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:21.861 [2024-10-08 20:46:50.433736] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:21.861 [2024-10-08 20:46:50.433744] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:21.861 [2024-10-08 20:46:50.433752] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.433771] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.433788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.441664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.441688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.861 [2024-10-08 20:46:50.441702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.861 [2024-10-08 20:46:50.441713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.861 [2024-10-08 20:46:50.441725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.861 [2024-10-08 20:46:50.441734] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.441754] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.441770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.449661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.449679] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:21.861 [2024-10-08 20:46:50.449689] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.449700] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.449715] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.449731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.457661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.457733] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.457750] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.457764] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:21.861 [2024-10-08 20:46:50.457773] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:21.861 [2024-10-08 20:46:50.457779] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.457789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.465663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.465688] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:21.861 [2024-10-08 20:46:50.465711] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.465727] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.465740] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.861 [2024-10-08 20:46:50.465749] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.861 [2024-10-08 20:46:50.465755] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.465765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.473662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.473693] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.473713] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.473727] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.861 [2024-10-08 20:46:50.473736] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.861 [2024-10-08 20:46:50.473742] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.473752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.481661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.481684] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481697] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481712] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481722] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481730] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481738] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481746] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:21.861 [2024-10-08 20:46:50.481754] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:21.861 [2024-10-08 20:46:50.481762] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:21.861 [2024-10-08 20:46:50.481787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.489663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.489689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.497661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.497686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.505678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.505707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.513662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.513708] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:21.861 [2024-10-08 20:46:50.513719] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:21.861 [2024-10-08 20:46:50.513726] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:21.861 [2024-10-08 20:46:50.513732] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:21.861 [2024-10-08 20:46:50.513744] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:21.861 [2024-10-08 20:46:50.513755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:21.861 [2024-10-08 20:46:50.513767] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:21.861 [2024-10-08 20:46:50.513776] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:21.861 [2024-10-08 20:46:50.513782] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.513791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.513802] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:21.861 [2024-10-08 20:46:50.513811] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.861 [2024-10-08 20:46:50.513817] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.513825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.513837] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:21.861 [2024-10-08 20:46:50.513845] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:21.861 [2024-10-08 20:46:50.513851] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.861 [2024-10-08 20:46:50.513860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:21.861 [2024-10-08 20:46:50.521668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.521708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.521727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:21.861 [2024-10-08 20:46:50.521739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:21.862 ===================================================== 00:19:21.862 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:21.862 ===================================================== 00:19:21.862 Controller Capabilities/Features 00:19:21.862 ================================ 00:19:21.862 Vendor ID: 4e58 00:19:21.862 Subsystem Vendor ID: 4e58 00:19:21.862 Serial Number: SPDK2 00:19:21.862 Model Number: SPDK bdev Controller 00:19:21.862 Firmware Version: 25.01 00:19:21.862 Recommended Arb Burst: 6 00:19:21.862 IEEE OUI Identifier: 8d 6b 50 00:19:21.862 Multi-path I/O 00:19:21.862 May have multiple subsystem ports: Yes 00:19:21.862 May have multiple controllers: Yes 00:19:21.862 Associated with SR-IOV VF: No 00:19:21.862 Max Data Transfer Size: 131072 00:19:21.862 Max Number of Namespaces: 32 00:19:21.862 Max Number of I/O Queues: 127 00:19:21.862 NVMe Specification Version (VS): 1.3 00:19:21.862 NVMe Specification Version (Identify): 1.3 00:19:21.862 Maximum Queue Entries: 256 00:19:21.862 Contiguous Queues Required: Yes 00:19:21.862 Arbitration Mechanisms Supported 00:19:21.862 Weighted Round Robin: Not Supported 00:19:21.862 Vendor Specific: Not Supported 00:19:21.862 Reset Timeout: 15000 ms 00:19:21.862 Doorbell Stride: 4 bytes 00:19:21.862 NVM Subsystem Reset: Not Supported 00:19:21.862 Command 
Sets Supported 00:19:21.862 NVM Command Set: Supported 00:19:21.862 Boot Partition: Not Supported 00:19:21.862 Memory Page Size Minimum: 4096 bytes 00:19:21.862 Memory Page Size Maximum: 4096 bytes 00:19:21.862 Persistent Memory Region: Not Supported 00:19:21.862 Optional Asynchronous Events Supported 00:19:21.862 Namespace Attribute Notices: Supported 00:19:21.862 Firmware Activation Notices: Not Supported 00:19:21.862 ANA Change Notices: Not Supported 00:19:21.862 PLE Aggregate Log Change Notices: Not Supported 00:19:21.862 LBA Status Info Alert Notices: Not Supported 00:19:21.862 EGE Aggregate Log Change Notices: Not Supported 00:19:21.862 Normal NVM Subsystem Shutdown event: Not Supported 00:19:21.862 Zone Descriptor Change Notices: Not Supported 00:19:21.862 Discovery Log Change Notices: Not Supported 00:19:21.862 Controller Attributes 00:19:21.862 128-bit Host Identifier: Supported 00:19:21.862 Non-Operational Permissive Mode: Not Supported 00:19:21.862 NVM Sets: Not Supported 00:19:21.862 Read Recovery Levels: Not Supported 00:19:21.862 Endurance Groups: Not Supported 00:19:21.862 Predictable Latency Mode: Not Supported 00:19:21.862 Traffic Based Keep ALive: Not Supported 00:19:21.862 Namespace Granularity: Not Supported 00:19:21.862 SQ Associations: Not Supported 00:19:21.862 UUID List: Not Supported 00:19:21.862 Multi-Domain Subsystem: Not Supported 00:19:21.862 Fixed Capacity Management: Not Supported 00:19:21.862 Variable Capacity Management: Not Supported 00:19:21.862 Delete Endurance Group: Not Supported 00:19:21.862 Delete NVM Set: Not Supported 00:19:21.862 Extended LBA Formats Supported: Not Supported 00:19:21.862 Flexible Data Placement Supported: Not Supported 00:19:21.862 00:19:21.862 Controller Memory Buffer Support 00:19:21.862 ================================ 00:19:21.862 Supported: No 00:19:21.862 00:19:21.862 Persistent Memory Region Support 00:19:21.862 ================================ 00:19:21.862 Supported: No 00:19:21.862 00:19:21.862 Admin Command Set Attributes 00:19:21.862 ============================ 00:19:21.862 Security Send/Receive: Not Supported 00:19:21.862 Format NVM: Not Supported 00:19:21.862 Firmware Activate/Download: Not Supported 00:19:21.862 Namespace Management: Not Supported 00:19:21.862 Device Self-Test: Not Supported 00:19:21.862 Directives: Not Supported 00:19:21.862 NVMe-MI: Not Supported 00:19:21.862 Virtualization Management: Not Supported 00:19:21.862 Doorbell Buffer Config: Not Supported 00:19:21.862 Get LBA Status Capability: Not Supported 00:19:21.862 Command & Feature Lockdown Capability: Not Supported 00:19:21.862 Abort Command Limit: 4 00:19:21.862 Async Event Request Limit: 4 00:19:21.862 Number of Firmware Slots: N/A 00:19:21.862 Firmware Slot 1 Read-Only: N/A 00:19:21.862 Firmware Activation Without Reset: N/A 00:19:21.862 Multiple Update Detection Support: N/A 00:19:21.862 Firmware Update Granularity: No Information Provided 00:19:21.862 Per-Namespace SMART Log: No 00:19:21.862 Asymmetric Namespace Access Log Page: Not Supported 00:19:21.862 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:21.862 Command Effects Log Page: Supported 00:19:21.862 Get Log Page Extended Data: Supported 00:19:21.862 Telemetry Log Pages: Not Supported 00:19:21.862 Persistent Event Log Pages: Not Supported 00:19:21.862 Supported Log Pages Log Page: May Support 00:19:21.862 Commands Supported & Effects Log Page: Not Supported 00:19:21.862 Feature Identifiers & Effects Log Page:May Support 00:19:21.862 NVMe-MI Commands & Effects Log Page: May Support 
00:19:21.862 Data Area 4 for Telemetry Log: Not Supported 00:19:21.862 Error Log Page Entries Supported: 128 00:19:21.862 Keep Alive: Supported 00:19:21.862 Keep Alive Granularity: 10000 ms 00:19:21.862 00:19:21.862 NVM Command Set Attributes 00:19:21.862 ========================== 00:19:21.862 Submission Queue Entry Size 00:19:21.862 Max: 64 00:19:21.862 Min: 64 00:19:21.862 Completion Queue Entry Size 00:19:21.862 Max: 16 00:19:21.862 Min: 16 00:19:21.862 Number of Namespaces: 32 00:19:21.862 Compare Command: Supported 00:19:21.862 Write Uncorrectable Command: Not Supported 00:19:21.862 Dataset Management Command: Supported 00:19:21.862 Write Zeroes Command: Supported 00:19:21.862 Set Features Save Field: Not Supported 00:19:21.862 Reservations: Not Supported 00:19:21.862 Timestamp: Not Supported 00:19:21.862 Copy: Supported 00:19:21.862 Volatile Write Cache: Present 00:19:21.862 Atomic Write Unit (Normal): 1 00:19:21.862 Atomic Write Unit (PFail): 1 00:19:21.862 Atomic Compare & Write Unit: 1 00:19:21.862 Fused Compare & Write: Supported 00:19:21.862 Scatter-Gather List 00:19:21.862 SGL Command Set: Supported (Dword aligned) 00:19:21.862 SGL Keyed: Not Supported 00:19:21.862 SGL Bit Bucket Descriptor: Not Supported 00:19:21.862 SGL Metadata Pointer: Not Supported 00:19:21.862 Oversized SGL: Not Supported 00:19:21.862 SGL Metadata Address: Not Supported 00:19:21.862 SGL Offset: Not Supported 00:19:21.862 Transport SGL Data Block: Not Supported 00:19:21.862 Replay Protected Memory Block: Not Supported 00:19:21.862 00:19:21.862 Firmware Slot Information 00:19:21.862 ========================= 00:19:21.862 Active slot: 1 00:19:21.862 Slot 1 Firmware Revision: 25.01 00:19:21.862 00:19:21.862 00:19:21.862 Commands Supported and Effects 00:19:21.862 ============================== 00:19:21.862 Admin Commands 00:19:21.862 -------------- 00:19:21.862 Get Log Page (02h): Supported 00:19:21.862 Identify (06h): Supported 00:19:21.862 Abort (08h): Supported 00:19:21.862 Set Features (09h): Supported 00:19:21.862 Get Features (0Ah): Supported 00:19:21.862 Asynchronous Event Request (0Ch): Supported 00:19:21.862 Keep Alive (18h): Supported 00:19:21.862 I/O Commands 00:19:21.862 ------------ 00:19:21.862 Flush (00h): Supported LBA-Change 00:19:21.862 Write (01h): Supported LBA-Change 00:19:21.862 Read (02h): Supported 00:19:21.862 Compare (05h): Supported 00:19:21.862 Write Zeroes (08h): Supported LBA-Change 00:19:21.862 Dataset Management (09h): Supported LBA-Change 00:19:21.862 Copy (19h): Supported LBA-Change 00:19:21.862 00:19:21.862 Error Log 00:19:21.862 ========= 00:19:21.862 00:19:21.862 Arbitration 00:19:21.862 =========== 00:19:21.862 Arbitration Burst: 1 00:19:21.862 00:19:21.862 Power Management 00:19:21.862 ================ 00:19:21.862 Number of Power States: 1 00:19:21.862 Current Power State: Power State #0 00:19:21.862 Power State #0: 00:19:21.862 Max Power: 0.00 W 00:19:21.862 Non-Operational State: Operational 00:19:21.862 Entry Latency: Not Reported 00:19:21.862 Exit Latency: Not Reported 00:19:21.862 Relative Read Throughput: 0 00:19:21.862 Relative Read Latency: 0 00:19:21.862 Relative Write Throughput: 0 00:19:21.862 Relative Write Latency: 0 00:19:21.862 Idle Power: Not Reported 00:19:21.862 Active Power: Not Reported 00:19:21.862 Non-Operational Permissive Mode: Not Supported 00:19:21.862 00:19:21.862 Health Information 00:19:21.862 ================== 00:19:21.862 Critical Warnings: 00:19:21.862 Available Spare Space: OK 00:19:21.862 Temperature: OK 00:19:21.862 Device 
Reliability: OK 00:19:21.862 Read Only: No 00:19:21.862 Volatile Memory Backup: OK 00:19:21.862 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:21.862 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:21.862 Available Spare: 0% 00:19:21.862 Available Sp[2024-10-08 20:46:50.521856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:21.862 [2024-10-08 20:46:50.529677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:21.862 [2024-10-08 20:46:50.529729] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:21.862 [2024-10-08 20:46:50.529748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.862 [2024-10-08 20:46:50.529759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.863 [2024-10-08 20:46:50.529769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.863 [2024-10-08 20:46:50.529778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.863 [2024-10-08 20:46:50.529845] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:21.863 [2024-10-08 20:46:50.529868] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:21.863 [2024-10-08 20:46:50.530854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.863 [2024-10-08 20:46:50.530929] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:21.863 [2024-10-08 20:46:50.530953] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:21.863 [2024-10-08 20:46:50.531863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:21.863 [2024-10-08 20:46:50.531890] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:21.863 [2024-10-08 20:46:50.531973] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:21.863 [2024-10-08 20:46:50.533153] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:21.863 are Threshold: 0% 00:19:21.863 Life Percentage Used: 0% 00:19:21.863 Data Units Read: 0 00:19:21.863 Data Units Written: 0 00:19:21.863 Host Read Commands: 0 00:19:21.863 Host Write Commands: 0 00:19:21.863 Controller Busy Time: 0 minutes 00:19:21.863 Power Cycles: 0 00:19:21.863 Power On Hours: 0 hours 00:19:21.863 Unsafe Shutdowns: 0 00:19:21.863 Unrecoverable Media Errors: 0 00:19:21.863 Lifetime Error Log Entries: 0 00:19:21.863 Warning Temperature Time: 0 minutes 00:19:21.863 Critical Temperature Time: 0 minutes 00:19:21.863 00:19:21.863 Number of Queues 00:19:21.863 ================ 00:19:21.863 Number of 
I/O Submission Queues: 127 00:19:21.863 Number of I/O Completion Queues: 127 00:19:21.863 00:19:21.863 Active Namespaces 00:19:21.863 ================= 00:19:21.863 Namespace ID:1 00:19:21.863 Error Recovery Timeout: Unlimited 00:19:21.863 Command Set Identifier: NVM (00h) 00:19:21.863 Deallocate: Supported 00:19:21.863 Deallocated/Unwritten Error: Not Supported 00:19:21.863 Deallocated Read Value: Unknown 00:19:21.863 Deallocate in Write Zeroes: Not Supported 00:19:21.863 Deallocated Guard Field: 0xFFFF 00:19:21.863 Flush: Supported 00:19:21.863 Reservation: Supported 00:19:21.863 Namespace Sharing Capabilities: Multiple Controllers 00:19:21.863 Size (in LBAs): 131072 (0GiB) 00:19:21.863 Capacity (in LBAs): 131072 (0GiB) 00:19:21.863 Utilization (in LBAs): 131072 (0GiB) 00:19:21.863 NGUID: 23A686104F4D4E3B877B207AF380A87A 00:19:21.863 UUID: 23a68610-4f4d-4e3b-877b-207af380a87a 00:19:21.863 Thin Provisioning: Not Supported 00:19:21.863 Per-NS Atomic Units: Yes 00:19:21.863 Atomic Boundary Size (Normal): 0 00:19:21.863 Atomic Boundary Size (PFail): 0 00:19:21.863 Atomic Boundary Offset: 0 00:19:21.863 Maximum Single Source Range Length: 65535 00:19:21.863 Maximum Copy Length: 65535 00:19:21.863 Maximum Source Range Count: 1 00:19:21.863 NGUID/EUI64 Never Reused: No 00:19:21.863 Namespace Write Protected: No 00:19:21.863 Number of LBA Formats: 1 00:19:21.863 Current LBA Format: LBA Format #00 00:19:21.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:21.863 00:19:21.863 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:22.146 [2024-10-08 20:46:50.802374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:27.452 Initializing NVMe Controllers 00:19:27.452 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:27.452 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:27.452 Initialization complete. Launching workers. 
00:19:27.452 ======================================================== 00:19:27.452 Latency(us) 00:19:27.452 Device Information : IOPS MiB/s Average min max 00:19:27.452 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31409.00 122.69 4075.00 1212.18 10375.51 00:19:27.452 ======================================================== 00:19:27.452 Total : 31409.00 122.69 4075.00 1212.18 10375.51 00:19:27.452 00:19:27.452 [2024-10-08 20:46:55.903028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:27.452 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:27.452 [2024-10-08 20:46:56.206880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:32.720 Initializing NVMe Controllers 00:19:32.720 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:32.720 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:32.720 Initialization complete. Launching workers. 00:19:32.720 ======================================================== 00:19:32.721 Latency(us) 00:19:32.721 Device Information : IOPS MiB/s Average min max 00:19:32.721 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29465.80 115.10 4346.65 1262.09 11396.66 00:19:32.721 ======================================================== 00:19:32.721 Total : 29465.80 115.10 4346.65 1262.09 11396.66 00:19:32.721 00:19:32.721 [2024-10-08 20:47:01.227856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:32.721 20:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:32.721 [2024-10-08 20:47:01.463379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:37.991 [2024-10-08 20:47:06.602807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:37.991 Initializing NVMe Controllers 00:19:37.991 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:37.991 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:37.991 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:37.991 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:37.991 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:37.991 Initialization complete. Launching workers. 
00:19:37.991 Starting thread on core 2 00:19:37.991 Starting thread on core 3 00:19:37.991 Starting thread on core 1 00:19:37.991 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:38.249 [2024-10-08 20:47:06.914298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:41.534 [2024-10-08 20:47:09.971994] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:41.534 Initializing NVMe Controllers 00:19:41.534 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:41.534 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:41.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:41.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:41.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:41.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:41.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:41.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:41.534 Initialization complete. Launching workers. 00:19:41.534 Starting thread on core 1 with urgent priority queue 00:19:41.534 Starting thread on core 2 with urgent priority queue 00:19:41.534 Starting thread on core 3 with urgent priority queue 00:19:41.534 Starting thread on core 0 with urgent priority queue 00:19:41.534 SPDK bdev Controller (SPDK2 ) core 0: 4410.67 IO/s 22.67 secs/100000 ios 00:19:41.534 SPDK bdev Controller (SPDK2 ) core 1: 5429.33 IO/s 18.42 secs/100000 ios 00:19:41.534 SPDK bdev Controller (SPDK2 ) core 2: 5551.00 IO/s 18.01 secs/100000 ios 00:19:41.534 SPDK bdev Controller (SPDK2 ) core 3: 5594.33 IO/s 17.88 secs/100000 ios 00:19:41.534 ======================================================== 00:19:41.534 00:19:41.534 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:41.793 [2024-10-08 20:47:10.316218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:41.793 Initializing NVMe Controllers 00:19:41.793 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:41.793 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:41.793 Namespace ID: 1 size: 0GB 00:19:41.793 Initialization complete. 00:19:41.793 INFO: using host memory buffer for IO 00:19:41.793 Hello world! 
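The nvmf_vfio_user.sh@84 through @88 runs above all drive the same vfio-user controller with spdk_nvme_perf, reconnect, arbitration and hello_world. A minimal sketch of the read-side perf invocation, assuming an SPDK build tree and a target already listening on /var/run/vfio-user/domain/vfio-user2/2 exactly as in this log (paths relative to the SPDK source root):

# Queue depth 128 (-q), 4 KiB I/Os (-o), sequential reads (-w) for 5 seconds (-t),
# pinned to core 1 via the 0x2 core mask (-c); -s/-g are carried over unchanged from the logged run.
build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The @85 run repeats the same command with -w write; the later examples differ only in the binary and its workload flags.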
00:19:41.793 [2024-10-08 20:47:10.325378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:41.793 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:42.052 [2024-10-08 20:47:10.689945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:43.426 Initializing NVMe Controllers 00:19:43.426 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:43.426 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:43.426 Initialization complete. Launching workers. 00:19:43.426 submit (in ns) avg, min, max = 8384.8, 3527.8, 4021031.1 00:19:43.426 complete (in ns) avg, min, max = 28101.8, 2075.6, 4999398.9 00:19:43.426 00:19:43.426 Submit histogram 00:19:43.426 ================ 00:19:43.426 Range in us Cumulative Count 00:19:43.426 3.508 - 3.532: 0.0158% ( 2) 00:19:43.426 3.532 - 3.556: 0.2050% ( 24) 00:19:43.426 3.556 - 3.579: 1.8924% ( 214) 00:19:43.426 3.579 - 3.603: 5.1885% ( 418) 00:19:43.426 3.603 - 3.627: 10.4558% ( 668) 00:19:43.426 3.627 - 3.650: 21.7316% ( 1430) 00:19:43.426 3.650 - 3.674: 32.6920% ( 1390) 00:19:43.426 3.674 - 3.698: 43.7865% ( 1407) 00:19:43.426 3.698 - 3.721: 52.1763% ( 1064) 00:19:43.426 3.721 - 3.745: 58.4608% ( 797) 00:19:43.426 3.745 - 3.769: 62.7740% ( 547) 00:19:43.426 3.769 - 3.793: 67.1345% ( 553) 00:19:43.426 3.793 - 3.816: 70.4148% ( 416) 00:19:43.426 3.816 - 3.840: 73.1194% ( 343) 00:19:43.426 3.840 - 3.864: 75.8713% ( 349) 00:19:43.426 3.864 - 3.887: 79.2304% ( 426) 00:19:43.426 3.887 - 3.911: 83.0390% ( 483) 00:19:43.426 3.911 - 3.935: 85.8224% ( 353) 00:19:43.426 3.935 - 3.959: 87.9041% ( 264) 00:19:43.426 3.959 - 3.982: 89.6152% ( 217) 00:19:43.426 3.982 - 4.006: 91.1213% ( 191) 00:19:43.426 4.006 - 4.030: 92.2331% ( 141) 00:19:43.426 4.030 - 4.053: 93.0374% ( 102) 00:19:43.426 4.053 - 4.077: 93.6840% ( 82) 00:19:43.427 4.077 - 4.101: 94.3542% ( 85) 00:19:43.427 4.101 - 4.124: 94.7879% ( 55) 00:19:43.427 4.124 - 4.148: 95.1979% ( 52) 00:19:43.427 4.148 - 4.172: 95.5133% ( 40) 00:19:43.427 4.172 - 4.196: 95.8603% ( 44) 00:19:43.427 4.196 - 4.219: 96.0337% ( 22) 00:19:43.427 4.219 - 4.243: 96.1520% ( 15) 00:19:43.427 4.243 - 4.267: 96.3334% ( 23) 00:19:43.427 4.267 - 4.290: 96.3886% ( 7) 00:19:43.427 4.290 - 4.314: 96.4753% ( 11) 00:19:43.427 4.314 - 4.338: 96.5463% ( 9) 00:19:43.427 4.338 - 4.361: 96.6015% ( 7) 00:19:43.427 4.361 - 4.385: 96.6882% ( 11) 00:19:43.427 4.385 - 4.409: 96.7276% ( 5) 00:19:43.427 4.409 - 4.433: 96.7750% ( 6) 00:19:43.427 4.433 - 4.456: 96.8223% ( 6) 00:19:43.427 4.456 - 4.480: 96.8617% ( 5) 00:19:43.427 4.480 - 4.504: 96.8932% ( 4) 00:19:43.427 4.527 - 4.551: 96.9090% ( 2) 00:19:43.427 4.551 - 4.575: 96.9327% ( 3) 00:19:43.427 4.575 - 4.599: 96.9484% ( 2) 00:19:43.427 4.599 - 4.622: 96.9642% ( 2) 00:19:43.427 4.622 - 4.646: 96.9721% ( 1) 00:19:43.427 4.670 - 4.693: 96.9957% ( 3) 00:19:43.427 4.693 - 4.717: 97.0115% ( 2) 00:19:43.427 4.717 - 4.741: 97.0194% ( 1) 00:19:43.427 4.741 - 4.764: 97.0352% ( 2) 00:19:43.427 4.764 - 4.788: 97.0667% ( 4) 00:19:43.427 4.788 - 4.812: 97.0746% ( 1) 00:19:43.427 4.812 - 4.836: 97.1140% ( 5) 00:19:43.427 4.836 - 4.859: 97.1613% ( 6) 00:19:43.427 4.859 - 4.883: 97.2086% ( 6) 00:19:43.427 4.883 - 4.907: 97.2954% ( 11) 00:19:43.427 4.907 - 
4.930: 97.3585% ( 8) 00:19:43.427 4.930 - 4.954: 97.3821% ( 3) 00:19:43.427 4.954 - 4.978: 97.4294% ( 6) 00:19:43.427 4.978 - 5.001: 97.4925% ( 8) 00:19:43.427 5.001 - 5.025: 97.5162% ( 3) 00:19:43.427 5.025 - 5.049: 97.5556% ( 5) 00:19:43.427 5.049 - 5.073: 97.6108% ( 7) 00:19:43.427 5.073 - 5.096: 97.6581% ( 6) 00:19:43.427 5.096 - 5.120: 97.6818% ( 3) 00:19:43.427 5.120 - 5.144: 97.7685% ( 11) 00:19:43.427 5.144 - 5.167: 97.8316% ( 8) 00:19:43.427 5.167 - 5.191: 97.8631% ( 4) 00:19:43.427 5.191 - 5.215: 97.8947% ( 4) 00:19:43.427 5.215 - 5.239: 97.9183% ( 3) 00:19:43.427 5.239 - 5.262: 97.9341% ( 2) 00:19:43.427 5.262 - 5.286: 97.9420% ( 1) 00:19:43.427 5.286 - 5.310: 97.9577% ( 2) 00:19:43.427 5.310 - 5.333: 97.9656% ( 1) 00:19:43.427 5.333 - 5.357: 97.9735% ( 1) 00:19:43.427 5.357 - 5.381: 97.9893% ( 2) 00:19:43.427 5.381 - 5.404: 97.9972% ( 1) 00:19:43.427 5.404 - 5.428: 98.0050% ( 1) 00:19:43.427 5.476 - 5.499: 98.0129% ( 1) 00:19:43.427 5.499 - 5.523: 98.0208% ( 1) 00:19:43.427 5.523 - 5.547: 98.0287% ( 1) 00:19:43.427 5.547 - 5.570: 98.0366% ( 1) 00:19:43.427 5.594 - 5.618: 98.0524% ( 2) 00:19:43.427 5.641 - 5.665: 98.0602% ( 1) 00:19:43.427 5.831 - 5.855: 98.0681% ( 1) 00:19:43.427 5.855 - 5.879: 98.0839% ( 2) 00:19:43.427 5.926 - 5.950: 98.0918% ( 1) 00:19:43.427 6.305 - 6.353: 98.0997% ( 1) 00:19:43.427 6.400 - 6.447: 98.1076% ( 1) 00:19:43.427 6.447 - 6.495: 98.1154% ( 1) 00:19:43.427 6.590 - 6.637: 98.1312% ( 2) 00:19:43.427 6.637 - 6.684: 98.1391% ( 1) 00:19:43.427 6.827 - 6.874: 98.1470% ( 1) 00:19:43.427 6.874 - 6.921: 98.1549% ( 1) 00:19:43.427 6.921 - 6.969: 98.1706% ( 2) 00:19:43.427 7.206 - 7.253: 98.1785% ( 1) 00:19:43.427 7.253 - 7.301: 98.1864% ( 1) 00:19:43.427 7.443 - 7.490: 98.1943% ( 1) 00:19:43.427 7.538 - 7.585: 98.2101% ( 2) 00:19:43.427 7.585 - 7.633: 98.2179% ( 1) 00:19:43.427 7.633 - 7.680: 98.2416% ( 3) 00:19:43.427 7.680 - 7.727: 98.2495% ( 1) 00:19:43.427 7.775 - 7.822: 98.2574% ( 1) 00:19:43.427 7.822 - 7.870: 98.2653% ( 1) 00:19:43.427 7.870 - 7.917: 98.2731% ( 1) 00:19:43.427 7.917 - 7.964: 98.2968% ( 3) 00:19:43.427 7.964 - 8.012: 98.3047% ( 1) 00:19:43.427 8.154 - 8.201: 98.3126% ( 1) 00:19:43.427 8.249 - 8.296: 98.3205% ( 1) 00:19:43.427 8.296 - 8.344: 98.3362% ( 2) 00:19:43.427 8.391 - 8.439: 98.3520% ( 2) 00:19:43.427 8.581 - 8.628: 98.3599% ( 1) 00:19:43.427 8.628 - 8.676: 98.3678% ( 1) 00:19:43.427 8.676 - 8.723: 98.3835% ( 2) 00:19:43.427 8.723 - 8.770: 98.3914% ( 1) 00:19:43.427 8.865 - 8.913: 98.3993% ( 1) 00:19:43.427 8.960 - 9.007: 98.4151% ( 2) 00:19:43.427 9.055 - 9.102: 98.4230% ( 1) 00:19:43.427 9.197 - 9.244: 98.4308% ( 1) 00:19:43.427 9.244 - 9.292: 98.4387% ( 1) 00:19:43.427 9.292 - 9.339: 98.4466% ( 1) 00:19:43.427 9.434 - 9.481: 98.4624% ( 2) 00:19:43.427 9.529 - 9.576: 98.4703% ( 1) 00:19:43.427 9.624 - 9.671: 98.4782% ( 1) 00:19:43.427 9.719 - 9.766: 98.4860% ( 1) 00:19:43.427 9.908 - 9.956: 98.5018% ( 2) 00:19:43.427 10.003 - 10.050: 98.5097% ( 1) 00:19:43.427 10.098 - 10.145: 98.5334% ( 3) 00:19:43.427 10.145 - 10.193: 98.5412% ( 1) 00:19:43.427 10.240 - 10.287: 98.5570% ( 2) 00:19:43.427 10.335 - 10.382: 98.5649% ( 1) 00:19:43.427 10.430 - 10.477: 98.5807% ( 2) 00:19:43.427 10.477 - 10.524: 98.5886% ( 1) 00:19:43.427 10.524 - 10.572: 98.5964% ( 1) 00:19:43.427 10.572 - 10.619: 98.6043% ( 1) 00:19:43.427 10.619 - 10.667: 98.6122% ( 1) 00:19:43.427 10.714 - 10.761: 98.6201% ( 1) 00:19:43.427 10.809 - 10.856: 98.6280% ( 1) 00:19:43.427 10.856 - 10.904: 98.6437% ( 2) 00:19:43.427 10.904 - 10.951: 98.6516% ( 1) 00:19:43.427 
10.951 - 10.999: 98.6595% ( 1) 00:19:43.427 10.999 - 11.046: 98.6674% ( 1) 00:19:43.427 11.046 - 11.093: 98.6753% ( 1) 00:19:43.427 11.093 - 11.141: 98.6832% ( 1) 00:19:43.427 11.141 - 11.188: 98.6911% ( 1) 00:19:43.427 11.188 - 11.236: 98.6989% ( 1) 00:19:43.427 11.520 - 11.567: 98.7068% ( 1) 00:19:43.427 11.757 - 11.804: 98.7147% ( 1) 00:19:43.427 11.804 - 11.852: 98.7226% ( 1) 00:19:43.427 11.852 - 11.899: 98.7305% ( 1) 00:19:43.427 11.994 - 12.041: 98.7384% ( 1) 00:19:43.427 12.041 - 12.089: 98.7463% ( 1) 00:19:43.427 12.136 - 12.231: 98.7541% ( 1) 00:19:43.427 12.705 - 12.800: 98.7620% ( 1) 00:19:43.427 12.800 - 12.895: 98.7857% ( 3) 00:19:43.427 12.990 - 13.084: 98.8015% ( 2) 00:19:43.427 13.274 - 13.369: 98.8251% ( 3) 00:19:43.427 13.464 - 13.559: 98.8330% ( 1) 00:19:43.427 13.559 - 13.653: 98.8409% ( 1) 00:19:43.427 13.653 - 13.748: 98.8645% ( 3) 00:19:43.427 13.938 - 14.033: 98.8724% ( 1) 00:19:43.427 14.033 - 14.127: 98.8882% ( 2) 00:19:43.427 14.127 - 14.222: 98.8961% ( 1) 00:19:43.427 14.222 - 14.317: 98.9197% ( 3) 00:19:43.427 14.412 - 14.507: 98.9276% ( 1) 00:19:43.427 14.601 - 14.696: 98.9355% ( 1) 00:19:43.427 14.791 - 14.886: 98.9434% ( 1) 00:19:43.427 15.170 - 15.265: 98.9513% ( 1) 00:19:43.427 16.972 - 17.067: 98.9670% ( 2) 00:19:43.427 17.067 - 17.161: 98.9749% ( 1) 00:19:43.427 17.161 - 17.256: 98.9828% ( 1) 00:19:43.427 17.256 - 17.351: 99.0144% ( 4) 00:19:43.427 17.351 - 17.446: 99.0380% ( 3) 00:19:43.427 17.446 - 17.541: 99.0617% ( 3) 00:19:43.427 17.541 - 17.636: 99.1011% ( 5) 00:19:43.427 17.636 - 17.730: 99.1247% ( 3) 00:19:43.427 17.730 - 17.825: 99.1957% ( 9) 00:19:43.427 17.825 - 17.920: 99.2430% ( 6) 00:19:43.427 17.920 - 18.015: 99.3061% ( 8) 00:19:43.427 18.015 - 18.110: 99.4007% ( 12) 00:19:43.427 18.110 - 18.204: 99.4165% ( 2) 00:19:43.427 18.204 - 18.299: 99.4480% ( 4) 00:19:43.427 18.299 - 18.394: 99.5111% ( 8) 00:19:43.427 18.394 - 18.489: 99.6136% ( 13) 00:19:43.427 18.489 - 18.584: 99.6846% ( 9) 00:19:43.427 18.584 - 18.679: 99.7240% ( 5) 00:19:43.427 18.679 - 18.773: 99.7634% ( 5) 00:19:43.427 18.773 - 18.868: 99.7713% ( 1) 00:19:43.427 18.868 - 18.963: 99.7792% ( 1) 00:19:43.427 18.963 - 19.058: 99.7950% ( 2) 00:19:43.427 19.058 - 19.153: 99.8029% ( 1) 00:19:43.427 20.196 - 20.290: 99.8108% ( 1) 00:19:43.427 20.859 - 20.954: 99.8186% ( 1) 00:19:43.427 22.471 - 22.566: 99.8265% ( 1) 00:19:43.427 22.756 - 22.850: 99.8344% ( 1) 00:19:43.427 23.609 - 23.704: 99.8423% ( 1) 00:19:43.427 23.893 - 23.988: 99.8502% ( 1) 00:19:43.427 24.462 - 24.652: 99.8581% ( 1) 00:19:43.427 24.652 - 24.841: 99.8660% ( 1) 00:19:43.427 25.221 - 25.410: 99.8738% ( 1) 00:19:43.427 27.686 - 27.876: 99.8817% ( 1) 00:19:43.427 29.013 - 29.203: 99.8896% ( 1) 00:19:43.427 3980.705 - 4004.978: 99.9763% ( 11) 00:19:43.427 4004.978 - 4029.250: 100.0000% ( 3) 00:19:43.427 00:19:43.427 Complete histogram 00:19:43.427 ================== 00:19:43.427 Range in us Cumulative Count 00:19:43.427 2.074 - 2.086: 10.1561% ( 1288) 00:19:43.427 2.086 - 2.098: 36.1851% ( 3301) 00:19:43.427 2.098 - 2.110: 39.1815% ( 380) 00:19:43.427 2.110 - 2.121: 52.1605% ( 1646) 00:19:43.427 2.121 - 2.133: 62.2851% ( 1284) 00:19:43.427 2.133 - 2.145: 64.3116% ( 257) 00:19:43.427 2.145 - 2.157: 71.7316% ( 941) 00:19:43.427 2.157 - 2.169: 79.2146% ( 949) 00:19:43.427 2.169 - 2.181: 80.3816% ( 148) 00:19:43.427 2.181 - 2.193: 85.5543% ( 656) 00:19:43.427 2.193 - 2.204: 89.6073% ( 514) 00:19:43.427 2.204 - 2.216: 90.4037% ( 101) 00:19:43.427 2.216 - 2.228: 91.3263% ( 117) 00:19:43.427 2.228 - 2.240: 92.1700% ( 107) 
00:19:43.427 2.240 - 2.252: 93.9599% ( 227) 00:19:43.427 2.252 - 2.264: 94.4015% ( 56) 00:19:43.427 2.264 - 2.276: 94.6854% ( 36) 00:19:43.427 2.276 - 2.287: 95.0008% ( 40) 00:19:43.427 2.287 - 2.299: 95.1506% ( 19) 00:19:43.427 2.299 - 2.311: 95.4424% ( 37) 00:19:43.427 2.311 - 2.323: 95.6789% ( 30) 00:19:43.427 2.323 - 2.335: 95.7657% ( 11) 00:19:43.427 2.335 - 2.347: 95.7972% ( 4) 00:19:43.427 2.347 - 2.359: 95.8366% ( 5) 00:19:43.427 2.359 - 2.370: 95.8445% ( 1) 00:19:43.427 2.370 - 2.382: 95.8760% ( 4) 00:19:43.427 2.382 - 2.394: 95.9076% ( 4) 00:19:43.427 2.394 - 2.406: 96.0101% ( 13) 00:19:43.427 2.406 - 2.418: 96.0574% ( 6) 00:19:43.427 2.418 - 2.430: 96.2309% ( 22) 00:19:43.427 2.430 - 2.441: 96.4517% ( 28) 00:19:43.427 2.441 - 2.453: 96.6015% ( 19) 00:19:43.427 2.453 - 2.465: 96.7986% ( 25) 00:19:43.427 2.465 - 2.477: 97.0273% ( 29) 00:19:43.427 2.477 - 2.489: 97.2481% ( 28) 00:19:43.427 2.489 - 2.501: 97.4294% ( 23) 00:19:43.427 2.501 - 2.513: 97.6739% ( 31) 00:19:43.427 2.513 - 2.524: 97.7606% ( 11) 00:19:43.427 2.524 - 2.536: 97.9262% ( 21) 00:19:43.427 2.536 - 2.548: 97.9814% ( 7) 00:19:43.427 2.548 - 2.560: 98.0524% ( 9) 00:19:43.427 2.560 - 2.572: 98.0997% ( 6) 00:19:43.427 2.572 - 2.584: 98.1391% ( 5) 00:19:43.427 2.584 - 2.596: 98.1706% ( 4) 00:19:43.427 2.596 - 2.607: 98.1785% ( 1) 00:19:43.427 2.607 - 2.619: 98.2022% ( 3) 00:19:43.427 2.619 - 2.631: 9[2024-10-08 20:47:11.789689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:43.427 8.2495% ( 6) 00:19:43.427 2.631 - 2.643: 98.2653% ( 2) 00:19:43.427 2.643 - 2.655: 98.2810% ( 2) 00:19:43.427 2.667 - 2.679: 98.2968% ( 2) 00:19:43.427 2.679 - 2.690: 98.3047% ( 1) 00:19:43.427 2.702 - 2.714: 98.3126% ( 1) 00:19:43.427 2.714 - 2.726: 98.3205% ( 1) 00:19:43.427 2.761 - 2.773: 98.3362% ( 2) 00:19:43.427 2.773 - 2.785: 98.3441% ( 1) 00:19:43.427 2.797 - 2.809: 98.3520% ( 1) 00:19:43.427 2.809 - 2.821: 98.3599% ( 1) 00:19:43.427 2.844 - 2.856: 98.3678% ( 1) 00:19:43.427 2.880 - 2.892: 98.3757% ( 1) 00:19:43.427 2.892 - 2.904: 98.3835% ( 1) 00:19:43.427 2.927 - 2.939: 98.3993% ( 2) 00:19:43.427 2.975 - 2.987: 98.4072% ( 1) 00:19:43.427 2.999 - 3.010: 98.4151% ( 1) 00:19:43.427 3.034 - 3.058: 98.4230% ( 1) 00:19:43.427 3.129 - 3.153: 98.4308% ( 1) 00:19:43.427 3.319 - 3.342: 98.4387% ( 1) 00:19:43.427 3.366 - 3.390: 98.4466% ( 1) 00:19:43.427 3.390 - 3.413: 98.4545% ( 1) 00:19:43.427 3.650 - 3.674: 98.4624% ( 1) 00:19:43.427 3.674 - 3.698: 98.4782% ( 2) 00:19:43.427 3.698 - 3.721: 98.4939% ( 2) 00:19:43.427 3.793 - 3.816: 98.5097% ( 2) 00:19:43.427 3.816 - 3.840: 98.5255% ( 2) 00:19:43.427 3.887 - 3.911: 98.5334% ( 1) 00:19:43.427 3.911 - 3.935: 98.5570% ( 3) 00:19:43.427 3.935 - 3.959: 98.5728% ( 2) 00:19:43.427 3.959 - 3.982: 98.5964% ( 3) 00:19:43.427 3.982 - 4.006: 98.6043% ( 1) 00:19:43.427 4.030 - 4.053: 98.6122% ( 1) 00:19:43.427 4.077 - 4.101: 98.6280% ( 2) 00:19:43.427 4.101 - 4.124: 98.6359% ( 1) 00:19:43.427 4.124 - 4.148: 98.6437% ( 1) 00:19:43.427 4.148 - 4.172: 98.6516% ( 1) 00:19:43.427 4.219 - 4.243: 98.6674% ( 2) 00:19:43.427 4.290 - 4.314: 98.6753% ( 1) 00:19:43.427 4.314 - 4.338: 98.6832% ( 1) 00:19:43.427 4.361 - 4.385: 98.6911% ( 1) 00:19:43.427 6.590 - 6.637: 98.6989% ( 1) 00:19:43.427 6.921 - 6.969: 98.7068% ( 1) 00:19:43.427 7.159 - 7.206: 98.7147% ( 1) 00:19:43.427 7.206 - 7.253: 98.7305% ( 2) 00:19:43.427 7.538 - 7.585: 98.7541% ( 3) 00:19:43.427 7.585 - 7.633: 98.7620% ( 1) 00:19:43.427 7.822 - 7.870: 98.7699% ( 1) 00:19:43.427 7.964 - 8.012: 
98.7778% ( 1) 00:19:43.427 8.059 - 8.107: 98.7857% ( 1) 00:19:43.427 8.249 - 8.296: 98.7936% ( 1) 00:19:43.427 8.344 - 8.391: 98.8015% ( 1) 00:19:43.427 8.676 - 8.723: 98.8172% ( 2) 00:19:43.427 9.624 - 9.671: 98.8251% ( 1) 00:19:43.427 11.046 - 11.093: 98.8330% ( 1) 00:19:43.427 11.520 - 11.567: 98.8409% ( 1) 00:19:43.427 12.231 - 12.326: 98.8488% ( 1) 00:19:43.427 12.610 - 12.705: 98.8566% ( 1) 00:19:43.427 14.033 - 14.127: 98.8645% ( 1) 00:19:43.427 14.507 - 14.601: 98.8724% ( 1) 00:19:43.427 15.360 - 15.455: 98.8803% ( 1) 00:19:43.427 15.739 - 15.834: 98.9040% ( 3) 00:19:43.427 15.834 - 15.929: 98.9355% ( 4) 00:19:43.427 15.929 - 16.024: 98.9592% ( 3) 00:19:43.427 16.024 - 16.119: 99.0065% ( 6) 00:19:43.427 16.119 - 16.213: 99.0301% ( 3) 00:19:43.427 16.213 - 16.308: 99.0695% ( 5) 00:19:43.427 16.308 - 16.403: 99.1169% ( 6) 00:19:43.427 16.403 - 16.498: 99.1326% ( 2) 00:19:43.427 16.498 - 16.593: 99.1563% ( 3) 00:19:43.427 16.593 - 16.687: 99.2036% ( 6) 00:19:43.427 16.687 - 16.782: 99.2430% ( 5) 00:19:43.427 16.782 - 16.877: 99.3140% ( 9) 00:19:43.427 16.877 - 16.972: 99.3219% ( 1) 00:19:43.427 17.067 - 17.161: 99.3298% ( 1) 00:19:43.427 17.161 - 17.256: 99.3376% ( 1) 00:19:43.427 17.920 - 18.015: 99.3455% ( 1) 00:19:43.427 18.204 - 18.299: 99.3534% ( 1) 00:19:43.427 3021.938 - 3034.074: 99.3613% ( 1) 00:19:43.427 3034.074 - 3046.210: 99.3692% ( 1) 00:19:43.427 3980.705 - 4004.978: 99.8738% ( 64) 00:19:43.427 4004.978 - 4029.250: 99.9842% ( 14) 00:19:43.427 4975.881 - 5000.154: 100.0000% ( 2) 00:19:43.427 00:19:43.427 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:43.427 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:43.427 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:43.427 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:43.427 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:43.686 [ 00:19:43.686 { 00:19:43.686 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:43.686 "subtype": "Discovery", 00:19:43.686 "listen_addresses": [], 00:19:43.686 "allow_any_host": true, 00:19:43.686 "hosts": [] 00:19:43.686 }, 00:19:43.686 { 00:19:43.686 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:43.686 "subtype": "NVMe", 00:19:43.686 "listen_addresses": [ 00:19:43.686 { 00:19:43.686 "trtype": "VFIOUSER", 00:19:43.686 "adrfam": "IPv4", 00:19:43.686 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:43.686 "trsvcid": "0" 00:19:43.686 } 00:19:43.686 ], 00:19:43.686 "allow_any_host": true, 00:19:43.686 "hosts": [], 00:19:43.686 "serial_number": "SPDK1", 00:19:43.686 "model_number": "SPDK bdev Controller", 00:19:43.686 "max_namespaces": 32, 00:19:43.686 "min_cntlid": 1, 00:19:43.686 "max_cntlid": 65519, 00:19:43.686 "namespaces": [ 00:19:43.686 { 00:19:43.686 "nsid": 1, 00:19:43.686 "bdev_name": "Malloc1", 00:19:43.686 "name": "Malloc1", 00:19:43.686 "nguid": "7FE534E809334FBF96710DB60A68D6B7", 00:19:43.686 "uuid": "7fe534e8-0933-4fbf-9671-0db60a68d6b7" 00:19:43.686 }, 00:19:43.686 { 00:19:43.686 "nsid": 2, 00:19:43.686 "bdev_name": "Malloc3", 00:19:43.686 "name": "Malloc3", 00:19:43.686 "nguid": 
"45D6924D39E74437B2649B4DC29F34D6", 00:19:43.686 "uuid": "45d6924d-39e7-4437-b264-9b4dc29f34d6" 00:19:43.686 } 00:19:43.686 ] 00:19:43.686 }, 00:19:43.686 { 00:19:43.686 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:43.686 "subtype": "NVMe", 00:19:43.686 "listen_addresses": [ 00:19:43.686 { 00:19:43.686 "trtype": "VFIOUSER", 00:19:43.686 "adrfam": "IPv4", 00:19:43.686 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:43.686 "trsvcid": "0" 00:19:43.686 } 00:19:43.686 ], 00:19:43.686 "allow_any_host": true, 00:19:43.686 "hosts": [], 00:19:43.686 "serial_number": "SPDK2", 00:19:43.686 "model_number": "SPDK bdev Controller", 00:19:43.686 "max_namespaces": 32, 00:19:43.686 "min_cntlid": 1, 00:19:43.686 "max_cntlid": 65519, 00:19:43.686 "namespaces": [ 00:19:43.686 { 00:19:43.686 "nsid": 1, 00:19:43.686 "bdev_name": "Malloc2", 00:19:43.686 "name": "Malloc2", 00:19:43.686 "nguid": "23A686104F4D4E3B877B207AF380A87A", 00:19:43.686 "uuid": "23a68610-4f4d-4e3b-877b-207af380a87a" 00:19:43.686 } 00:19:43.686 ] 00:19:43.686 } 00:19:43.686 ] 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1703220 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:43.686 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:43.686 [2024-10-08 20:47:12.381152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:44.252 Malloc4 00:19:44.252 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:44.818 [2024-10-08 20:47:13.350722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:44.818 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:44.818 Asynchronous Event Request test 00:19:44.818 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.818 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.818 Registering asynchronous event callbacks... 
00:19:44.818 Starting namespace attribute notice tests for all controllers... 00:19:44.818 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:44.818 aer_cb - Changed Namespace 00:19:44.818 Cleaning up... 00:19:45.076 [ 00:19:45.076 { 00:19:45.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:45.076 "subtype": "Discovery", 00:19:45.076 "listen_addresses": [], 00:19:45.076 "allow_any_host": true, 00:19:45.076 "hosts": [] 00:19:45.076 }, 00:19:45.076 { 00:19:45.076 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:45.076 "subtype": "NVMe", 00:19:45.076 "listen_addresses": [ 00:19:45.076 { 00:19:45.076 "trtype": "VFIOUSER", 00:19:45.076 "adrfam": "IPv4", 00:19:45.076 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:45.076 "trsvcid": "0" 00:19:45.076 } 00:19:45.076 ], 00:19:45.076 "allow_any_host": true, 00:19:45.076 "hosts": [], 00:19:45.076 "serial_number": "SPDK1", 00:19:45.076 "model_number": "SPDK bdev Controller", 00:19:45.076 "max_namespaces": 32, 00:19:45.076 "min_cntlid": 1, 00:19:45.076 "max_cntlid": 65519, 00:19:45.076 "namespaces": [ 00:19:45.076 { 00:19:45.076 "nsid": 1, 00:19:45.076 "bdev_name": "Malloc1", 00:19:45.076 "name": "Malloc1", 00:19:45.076 "nguid": "7FE534E809334FBF96710DB60A68D6B7", 00:19:45.076 "uuid": "7fe534e8-0933-4fbf-9671-0db60a68d6b7" 00:19:45.076 }, 00:19:45.076 { 00:19:45.076 "nsid": 2, 00:19:45.076 "bdev_name": "Malloc3", 00:19:45.076 "name": "Malloc3", 00:19:45.076 "nguid": "45D6924D39E74437B2649B4DC29F34D6", 00:19:45.076 "uuid": "45d6924d-39e7-4437-b264-9b4dc29f34d6" 00:19:45.076 } 00:19:45.076 ] 00:19:45.076 }, 00:19:45.076 { 00:19:45.076 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:45.076 "subtype": "NVMe", 00:19:45.076 "listen_addresses": [ 00:19:45.076 { 00:19:45.076 "trtype": "VFIOUSER", 00:19:45.076 "adrfam": "IPv4", 00:19:45.076 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:45.076 "trsvcid": "0" 00:19:45.076 } 00:19:45.076 ], 00:19:45.076 "allow_any_host": true, 00:19:45.076 "hosts": [], 00:19:45.076 "serial_number": "SPDK2", 00:19:45.076 "model_number": "SPDK bdev Controller", 00:19:45.076 "max_namespaces": 32, 00:19:45.076 "min_cntlid": 1, 00:19:45.076 "max_cntlid": 65519, 00:19:45.076 "namespaces": [ 00:19:45.076 { 00:19:45.076 "nsid": 1, 00:19:45.076 "bdev_name": "Malloc2", 00:19:45.076 "name": "Malloc2", 00:19:45.076 "nguid": "23A686104F4D4E3B877B207AF380A87A", 00:19:45.076 "uuid": "23a68610-4f4d-4e3b-877b-207af380a87a" 00:19:45.076 }, 00:19:45.076 { 00:19:45.076 "nsid": 2, 00:19:45.076 "bdev_name": "Malloc4", 00:19:45.076 "name": "Malloc4", 00:19:45.076 "nguid": "402E2620C9F8432E8DBFDB595B992F5C", 00:19:45.076 "uuid": "402e2620-c9f8-432e-8dbf-db595b992f5c" 00:19:45.076 } 00:19:45.076 ] 00:19:45.076 } 00:19:45.076 ] 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1703220 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1697528 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1697528 ']' 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1697528 00:19:45.076 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697528 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697528' 00:19:45.334 killing process with pid 1697528 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1697528 00:19:45.334 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1697528 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1703490 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1703490' 00:19:45.902 Process pid: 1703490 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1703490 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1703490 ']' 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.902 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:45.902 [2024-10-08 20:47:14.457051] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:45.902 [2024-10-08 20:47:14.458305] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:19:45.902 [2024-10-08 20:47:14.458385] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.902 [2024-10-08 20:47:14.567976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.161 [2024-10-08 20:47:14.779185] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.161 [2024-10-08 20:47:14.779297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.161 [2024-10-08 20:47:14.779334] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.161 [2024-10-08 20:47:14.779364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.161 [2024-10-08 20:47:14.779389] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.161 [2024-10-08 20:47:14.783070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.161 [2024-10-08 20:47:14.783194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.161 [2024-10-08 20:47:14.783295] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.161 [2024-10-08 20:47:14.783299] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.420 [2024-10-08 20:47:14.965988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:46.420 [2024-10-08 20:47:14.966493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:46.420 [2024-10-08 20:47:14.966801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:46.420 [2024-10-08 20:47:14.967787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:46.420 [2024-10-08 20:47:14.968237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
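The target started above sits idle in interrupt mode until the test script provisions it over the RPC socket; the trace in the next stretch of the log (nvmf_vfio_user.sh@64 through @74, repeated for the second device) does exactly that. A hedged outline of the per-device sequence, assuming the commands run from the SPDK source root with paths and names taken from this log:

# Create the vfio-user transport (the -M -I flags match the logged run),
# back a namespace with a 64 MiB / 512-byte-block malloc bdev, and expose it
# through a vfio-user socket directory.
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0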
00:19:46.986 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.986 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:46.986 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:47.922 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:48.180 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:48.180 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:48.438 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:48.439 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:48.439 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:49.005 Malloc1 00:19:49.005 20:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:49.264 20:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:49.528 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:50.095 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:50.095 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:50.095 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:50.353 Malloc2 00:19:50.353 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:50.611 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:50.869 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1703490 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1703490 ']' 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1703490 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.435 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1703490 00:19:51.435 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.435 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.435 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1703490' 00:19:51.435 killing process with pid 1703490 00:19:51.435 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1703490 00:19:51.435 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1703490 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:52.003 00:19:52.003 real 0m58.945s 00:19:52.003 user 3m44.010s 00:19:52.003 sys 0m5.144s 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:52.003 ************************************ 00:19:52.003 END TEST nvmf_vfio_user 00:19:52.003 ************************************ 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.003 ************************************ 00:19:52.003 START TEST nvmf_vfio_user_nvme_compliance 00:19:52.003 ************************************ 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:52.003 * Looking for test storage... 
00:19:52.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.003 --rc genhtml_branch_coverage=1 00:19:52.003 --rc genhtml_function_coverage=1 00:19:52.003 --rc genhtml_legend=1 00:19:52.003 --rc geninfo_all_blocks=1 00:19:52.003 --rc geninfo_unexecuted_blocks=1 00:19:52.003 00:19:52.003 ' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.003 --rc genhtml_branch_coverage=1 00:19:52.003 --rc genhtml_function_coverage=1 00:19:52.003 --rc genhtml_legend=1 00:19:52.003 --rc geninfo_all_blocks=1 00:19:52.003 --rc geninfo_unexecuted_blocks=1 00:19:52.003 00:19:52.003 ' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.003 --rc genhtml_branch_coverage=1 00:19:52.003 --rc genhtml_function_coverage=1 00:19:52.003 --rc genhtml_legend=1 00:19:52.003 --rc geninfo_all_blocks=1 00:19:52.003 --rc geninfo_unexecuted_blocks=1 00:19:52.003 00:19:52.003 ' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.003 --rc genhtml_branch_coverage=1 00:19:52.003 --rc genhtml_function_coverage=1 00:19:52.003 --rc genhtml_legend=1 00:19:52.003 --rc geninfo_all_blocks=1 00:19:52.003 --rc 
geninfo_unexecuted_blocks=1 00:19:52.003 00:19:52.003 ' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.003 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1704241 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1704241' 00:19:52.004 Process pid: 1704241 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1704241 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1704241 ']' 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.004 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:52.262 [2024-10-08 20:47:20.806356] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:19:52.262 [2024-10-08 20:47:20.806452] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.262 [2024-10-08 20:47:20.875441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.262 [2024-10-08 20:47:21.011236] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.262 [2024-10-08 20:47:21.011285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.262 [2024-10-08 20:47:21.011302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.262 [2024-10-08 20:47:21.011316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.262 [2024-10-08 20:47:21.011328] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.262 [2024-10-08 20:47:21.012428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.262 [2024-10-08 20:47:21.012493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.262 [2024-10-08 20:47:21.012496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.830 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.830 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:52.830 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.766 malloc0 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:53.766 20:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.766 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:54.026 00:19:54.026 00:19:54.026 CUnit - A unit testing framework for C - Version 2.1-3 00:19:54.026 http://cunit.sourceforge.net/ 00:19:54.026 00:19:54.026 00:19:54.026 Suite: nvme_compliance 00:19:54.026 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 20:47:22.728696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.026 [2024-10-08 20:47:22.730553] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:54.026 [2024-10-08 20:47:22.730617] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:54.026 [2024-10-08 20:47:22.730669] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:54.026 [2024-10-08 20:47:22.732772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.284 passed 00:19:54.284 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 20:47:22.864139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.284 [2024-10-08 20:47:22.870225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.284 passed 00:19:54.284 Test: admin_identify_ns ...[2024-10-08 20:47:23.009348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.542 [2024-10-08 20:47:23.068708] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:54.542 [2024-10-08 20:47:23.076705] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:54.542 [2024-10-08 20:47:23.097858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:54.542 passed 00:19:54.542 Test: admin_get_features_mandatory_features ...[2024-10-08 20:47:23.230719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.542 [2024-10-08 20:47:23.235750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.542 passed 00:19:54.800 Test: admin_get_features_optional_features ...[2024-10-08 20:47:23.366037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.800 [2024-10-08 20:47:23.369070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.800 passed 00:19:54.800 Test: admin_set_features_number_of_queues ...[2024-10-08 20:47:23.502317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.058 [2024-10-08 20:47:23.606829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.058 passed 00:19:55.058 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 20:47:23.739749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.058 [2024-10-08 20:47:23.742778] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.058 passed 00:19:55.316 Test: admin_get_log_page_with_lpo ...[2024-10-08 20:47:23.874761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.316 [2024-10-08 20:47:23.943707] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:55.316 [2024-10-08 20:47:23.956790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.316 passed 00:19:55.573 Test: fabric_property_get ...[2024-10-08 20:47:24.089898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.573 [2024-10-08 20:47:24.091726] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:55.573 [2024-10-08 20:47:24.092989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.573 passed 00:19:55.573 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 20:47:24.226379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.573 [2024-10-08 20:47:24.227907] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:55.573 [2024-10-08 20:47:24.229419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.573 passed 00:19:55.831 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 20:47:24.363541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.831 [2024-10-08 20:47:24.448671] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:55.831 [2024-10-08 20:47:24.464694] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:55.831 [2024-10-08 20:47:24.469820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.831 passed 00:19:56.089 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 20:47:24.599103] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.089 [2024-10-08 20:47:24.600789] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:56.089 [2024-10-08 20:47:24.602169] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.089 passed 00:19:56.089 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 20:47:24.736706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.089 [2024-10-08 20:47:24.811699] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:56.089 [2024-10-08 20:47:24.835676] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:56.089 [2024-10-08 20:47:24.840823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.347 passed 00:19:56.347 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 20:47:24.975715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.347 [2024-10-08 20:47:24.977247] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:56.347 [2024-10-08 20:47:24.977352] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:56.347 [2024-10-08 20:47:24.978744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.347 passed 00:19:56.605 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 20:47:25.111571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.605 [2024-10-08 20:47:25.203697] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:56.605 [2024-10-08 20:47:25.211698] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:56.605 [2024-10-08 20:47:25.219698] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:56.605 [2024-10-08 20:47:25.227699] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:56.605 [2024-10-08 20:47:25.256813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.605 passed 00:19:56.863 Test: admin_create_io_sq_verify_pc ...[2024-10-08 20:47:25.388754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.863 [2024-10-08 20:47:25.405726] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:56.863 [2024-10-08 20:47:25.423312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.863 passed 00:19:56.863 Test: admin_create_io_qp_max_qps ...[2024-10-08 20:47:25.557698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.237 [2024-10-08 20:47:26.667690] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:58.495 [2024-10-08 20:47:27.051579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.495 passed 00:19:58.495 Test: admin_create_io_sq_shared_cq ...[2024-10-08 20:47:27.181778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:58.753 [2024-10-08 20:47:27.315681] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:58.753 [2024-10-08 20:47:27.352794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.753 passed 00:19:58.753 00:19:58.753 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.753 suites 1 1 n/a 0 0 00:19:58.753 tests 18 18 18 0 0 00:19:58.753 asserts 360 
360 360 0 n/a 00:19:58.753 00:19:58.753 Elapsed time = 2.014 seconds 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1704241 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1704241 ']' 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1704241 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1704241 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1704241' 00:19:58.753 killing process with pid 1704241 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1704241 00:19:58.753 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1704241 00:19:59.319 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:59.319 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:59.319 00:19:59.319 real 0m7.423s 00:19:59.319 user 0m20.662s 00:19:59.319 sys 0m0.758s 00:19:59.319 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:59.319 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:59.319 ************************************ 00:19:59.320 END TEST nvmf_vfio_user_nvme_compliance 00:19:59.320 ************************************ 00:19:59.320 20:47:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:59.320 20:47:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:59.320 20:47:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:59.320 20:47:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.320 ************************************ 00:19:59.320 START TEST nvmf_vfio_user_fuzz 00:19:59.320 ************************************ 00:19:59.320 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:59.579 * Looking for test storage... 
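The compliance suite above (18/18 tests passed in about 2 seconds) ran against a vfio-user target that compliance.sh assembled with the rpc_cmd calls traced earlier: create the VFIOUSER transport, back it with a 64 MB / 512-byte-block malloc bdev, expose it as nqn.2021-09.io.spdk:cnode0, and listen on the /var/run/vfio-user socket directory. A minimal sketch of the same setup issued by hand, assuming rpc_cmd is the test helper around scripts/rpc.py and that an nvmf_tgt is already running on the default RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0      # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0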
00:19:59.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:59.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.579 --rc genhtml_branch_coverage=1 00:19:59.579 --rc genhtml_function_coverage=1 00:19:59.579 --rc genhtml_legend=1 00:19:59.579 --rc geninfo_all_blocks=1 00:19:59.579 --rc geninfo_unexecuted_blocks=1 00:19:59.579 00:19:59.579 ' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:59.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.579 --rc genhtml_branch_coverage=1 00:19:59.579 --rc genhtml_function_coverage=1 00:19:59.579 --rc genhtml_legend=1 00:19:59.579 --rc geninfo_all_blocks=1 00:19:59.579 --rc geninfo_unexecuted_blocks=1 00:19:59.579 00:19:59.579 ' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:59.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.579 --rc genhtml_branch_coverage=1 00:19:59.579 --rc genhtml_function_coverage=1 00:19:59.579 --rc genhtml_legend=1 00:19:59.579 --rc geninfo_all_blocks=1 00:19:59.579 --rc geninfo_unexecuted_blocks=1 00:19:59.579 00:19:59.579 ' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:59.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.579 --rc genhtml_branch_coverage=1 00:19:59.579 --rc genhtml_function_coverage=1 00:19:59.579 --rc genhtml_legend=1 00:19:59.579 --rc geninfo_all_blocks=1 00:19:59.579 --rc geninfo_unexecuted_blocks=1 00:19:59.579 00:19:59.579 ' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.579 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:59.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1705215 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1705215' 00:19:59.580 Process pid: 1705215 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1705215 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1705215 ']' 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
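For this fuzz pass the target is started with a smaller core mask than the compliance run: -m 0x1 pins nvmf_tgt to core 0, whereas the compliance target earlier used -m 0x7 (cores 0-2, matching the three reactor threads reported above), and the fuzzer itself is launched further down with -m 0x2 (core 1), so target and fuzzer do not share a core. A small sketch for decoding such masks, plain bash arithmetic only:

    # Decode an SPDK -m core mask: 0x7 -> cores 0,1,2; 0x1 -> core 0; 0x2 -> core 1.
    decode_coremask() {
      local mask=$(( $1 )) core
      for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
      done
    }
    decode_coremask 0x7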
00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.580 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.513 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.513 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:00.513 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.488 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 malloc0 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:20:01.488 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:33.576 Fuzzing completed. Shutting down the fuzz application 00:20:33.576 00:20:33.576 Dumping successful admin opcodes: 00:20:33.576 8, 9, 10, 24, 00:20:33.576 Dumping successful io opcodes: 00:20:33.576 0, 00:20:33.576 NS: 0x200003a1ef00 I/O qp, Total commands completed: 264361, total successful commands: 1045, random_seed: 336921216 00:20:33.576 NS: 0x200003a1ef00 admin qp, Total commands completed: 33780, total successful commands: 280, random_seed: 4094508416 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1705215 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1705215 ']' 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1705215 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705215 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.576 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.577 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1705215' 00:20:33.577 killing process with pid 1705215 00:20:33.577 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1705215 00:20:33.577 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1705215 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:33.577 00:20:33.577 real 0m33.360s 00:20:33.577 user 0m34.147s 00:20:33.577 sys 0m24.000s 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:33.577 
************************************ 00:20:33.577 END TEST nvmf_vfio_user_fuzz 00:20:33.577 ************************************ 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.577 ************************************ 00:20:33.577 START TEST nvmf_auth_target 00:20:33.577 ************************************ 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:33.577 * Looking for test storage... 00:20:33.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.577 --rc genhtml_branch_coverage=1 00:20:33.577 --rc genhtml_function_coverage=1 00:20:33.577 --rc genhtml_legend=1 00:20:33.577 --rc geninfo_all_blocks=1 00:20:33.577 --rc geninfo_unexecuted_blocks=1 00:20:33.577 00:20:33.577 ' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.577 --rc genhtml_branch_coverage=1 00:20:33.577 --rc genhtml_function_coverage=1 00:20:33.577 --rc genhtml_legend=1 00:20:33.577 --rc geninfo_all_blocks=1 00:20:33.577 --rc geninfo_unexecuted_blocks=1 00:20:33.577 00:20:33.577 ' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.577 --rc genhtml_branch_coverage=1 00:20:33.577 --rc genhtml_function_coverage=1 00:20:33.577 --rc genhtml_legend=1 00:20:33.577 --rc geninfo_all_blocks=1 00:20:33.577 --rc geninfo_unexecuted_blocks=1 00:20:33.577 00:20:33.577 ' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.577 --rc genhtml_branch_coverage=1 00:20:33.577 --rc genhtml_function_coverage=1 00:20:33.577 --rc genhtml_legend=1 00:20:33.577 --rc geninfo_all_blocks=1 00:20:33.577 --rc geninfo_unexecuted_blocks=1 00:20:33.577 00:20:33.577 ' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.577 20:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.577 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:33.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:33.578 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.112 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.112 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.112 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.112 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:36.113 
20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:36.113 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.113 20:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:36.113 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:36.113 Found net devices under 0000:84:00.0: cvl_0_0 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:36.113 Found net devices under 0000:84:00.1: cvl_0_1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.113 20:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:20:36.113 00:20:36.113 --- 10.0.0.2 ping statistics --- 00:20:36.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.113 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:36.113 00:20:36.113 --- 10.0.0.1 ping statistics --- 00:20:36.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.113 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:36.113 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1710800 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1710800 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1710800 ']' 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
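With connectivity verified in both directions by the pings above, nvmfappstart launches the NVMe-oF target inside that namespace with the nvmf_auth debug component enabled; the lines that follow start a second, host-side spdk_tgt on its own RPC socket with the matching nvme_auth component. Stripped of the workspace paths, the two processes are roughly:

    # target, inside the namespace, with target-side DH-HMAC-CHAP debug logging
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    # host application on a separate RPC socket, with initiator-side debug logging
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &

Both are then waited on until their RPC sockets (/var/tmp/spdk.sock and /var/tmp/host.sock) accept connections.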
00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.114 20:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1710831 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b45f6feef14975b8508b0b0c7b33320775041544ea13527f 00:20:37.048 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.NYo 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b45f6feef14975b8508b0b0c7b33320775041544ea13527f 0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b45f6feef14975b8508b0b0c7b33320775041544ea13527f 0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b45f6feef14975b8508b0b0c7b33320775041544ea13527f 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.NYo 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.NYo 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.NYo 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=07cd72d244005431fbe4cc1bdfd70039c16dc2e44fe2da632e5b5354662cd776 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.4a0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 07cd72d244005431fbe4cc1bdfd70039c16dc2e44fe2da632e5b5354662cd776 3 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 07cd72d244005431fbe4cc1bdfd70039c16dc2e44fe2da632e5b5354662cd776 3 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=07cd72d244005431fbe4cc1bdfd70039c16dc2e44fe2da632e5b5354662cd776 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.4a0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.4a0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4a0 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d6cccb7f9f3cff05ece5f3199a0d56d2 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.A5j 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d6cccb7f9f3cff05ece5f3199a0d56d2 1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d6cccb7f9f3cff05ece5f3199a0d56d2 1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d6cccb7f9f3cff05ece5f3199a0d56d2 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.A5j 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.A5j 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.A5j 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bc73e505b699b090b3c7927cf2bb39d471c4e553868bf7ba 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.M5U 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bc73e505b699b090b3c7927cf2bb39d471c4e553868bf7ba 2 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bc73e505b699b090b3c7927cf2bb39d471c4e553868bf7ba 2 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.049 20:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bc73e505b699b090b3c7927cf2bb39d471c4e553868bf7ba 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.049 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.M5U 00:20:37.315 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.M5U 00:20:37.315 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.M5U 00:20:37.315 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:37.315 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c8a7ab7fe9c26492467b791538f095c2d21ba031183acc73 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.TsI 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c8a7ab7fe9c26492467b791538f095c2d21ba031183acc73 2 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c8a7ab7fe9c26492467b791538f095c2d21ba031183acc73 2 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c8a7ab7fe9c26492467b791538f095c2d21ba031183acc73 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.TsI 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.TsI 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.TsI 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=28e1da932aaa03431b2c5ab61a0c3321 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XC5 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 28e1da932aaa03431b2c5ab61a0c3321 1 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 28e1da932aaa03431b2c5ab61a0c3321 1 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=28e1da932aaa03431b2c5ab61a0c3321 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:37.316 20:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XC5 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XC5 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XC5 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=11799402938c239f96db1f182e60a3bd508adb8210975883f20d46771f133805 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.8Yc 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 11799402938c239f96db1f182e60a3bd508adb8210975883f20d46771f133805 3 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 11799402938c239f96db1f182e60a3bd508adb8210975883f20d46771f133805 3 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=11799402938c239f96db1f182e60a3bd508adb8210975883f20d46771f133805 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:37.316 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.8Yc 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.8Yc 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8Yc 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1710800 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1710800 ']' 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.578 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1710831 /var/tmp/host.sock 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1710831 ']' 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:37.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
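All four key slots are now populated. Each gen_dhchap_key call above draws the requested number of hex characters from /dev/urandom, wraps them in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64>: (hash 00/01/02/03 selecting null/SHA-256/SHA-384/SHA-512, the base64 payload carrying the secret plus a 4-byte CRC), writes the result to a mode-0600 temp file, and hands the path back to keys[]/ckeys[]. A minimal sketch of that pattern; the actual DHHC-1 wrapping is done by the unshown "python -" body and is only summarized in the comment:

    gen_key() {                       # gen_key <digest> <hex-length>, e.g. gen_key sha256 32
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        # format_dhchap_key writes the secret to $file as
        # DHHC-1:<hash id>:<base64 of secret plus CRC-32>:  (not reproduced here)
        chmod 0600 "$file"
        echo "$file"
    }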
00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.836 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NYo 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NYo 00:20:38.095 20:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NYo 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.4a0 ]] 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0 00:20:38.662 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A5j 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.230 20:48:07 
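Every key file is then registered under a stable name with both RPC servers, so later RPCs can refer to key0, ckey0 and so on instead of file paths. For the first slot the calls above amount to (rpc.py path shortened to the in-tree script):

    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.NYo
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NYo
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0

The same pair of calls repeats below for key1/ckey1, key2/ckey2 and key3 (which has no controller key).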
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.A5j 00:20:39.230 20:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.A5j 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.M5U ]] 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M5U 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M5U 00:20:39.800 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M5U 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TsI 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TsI 00:20:40.059 20:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TsI 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XC5 ]] 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XC5 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XC5 00:20:40.626 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XC5 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:40.885 20:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yc 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yc 00:20:40.885 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yc 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:41.144 20:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.713 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.713 
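From here the script walks every digest, DH group and key combination. Each pass narrows the host application's allowed DH-HMAC-CHAP parameters, authorizes the host NQN on the subsystem with the chosen key pair, proves the handshake from both an SPDK initiator and the kernel initiator, then tears the authorization down again. The loop skeleton, condensed from the trace:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done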
20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.972 00:20:42.231 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.231 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.231 20:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.489 { 00:20:42.489 "cntlid": 1, 00:20:42.489 "qid": 0, 00:20:42.489 "state": "enabled", 00:20:42.489 "thread": "nvmf_tgt_poll_group_000", 00:20:42.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:42.489 "listen_address": { 00:20:42.489 "trtype": "TCP", 00:20:42.489 "adrfam": "IPv4", 00:20:42.489 "traddr": "10.0.0.2", 00:20:42.489 "trsvcid": "4420" 00:20:42.489 }, 00:20:42.489 "peer_address": { 00:20:42.489 "trtype": "TCP", 00:20:42.489 "adrfam": "IPv4", 00:20:42.489 "traddr": "10.0.0.1", 00:20:42.489 "trsvcid": "56186" 00:20:42.489 }, 00:20:42.489 "auth": { 00:20:42.489 "state": "completed", 00:20:42.489 "digest": "sha256", 00:20:42.489 "dhgroup": "null" 00:20:42.489 } 00:20:42.489 } 00:20:42.489 ]' 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.489 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.748 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.748 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.748 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.748 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.748 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.316 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
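connect_authenticate first checks the in-process path: it attaches an SPDK bdev controller with the named keys and reads the subsystem's queue pairs back from the target to confirm what was negotiated. For the pass above (sha256, null, key0) the sequence boils down to:

    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    # expected: completed   (with digest sha256 and dhgroup null for this pass)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

$hostnqn is the uuid-based host NQN set at the top of the script.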
DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:20:43.316 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.225 20:48:13 
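The kernel initiator is exercised with the same material: nvme-cli gets the plaintext DHHC-1 strings that back key0/ckey0, the connect must authenticate, and the host is then disconnected and removed from the subsystem before the next slot is tried. With the long secrets from the trace abbreviated to placeholders:

    hostid=cd6acfbe-4794-e311-a299-001e67a97b02    # UUID used for both hostnqn and hostid above
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
            -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
            --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$hostid"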
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.225 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.161 00:20:46.161 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.161 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.161 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.419 { 00:20:46.419 "cntlid": 3, 00:20:46.419 "qid": 0, 00:20:46.419 "state": "enabled", 00:20:46.419 "thread": "nvmf_tgt_poll_group_000", 00:20:46.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:46.419 "listen_address": { 00:20:46.419 "trtype": "TCP", 00:20:46.419 "adrfam": "IPv4", 00:20:46.419 "traddr": "10.0.0.2", 00:20:46.419 "trsvcid": "4420" 00:20:46.419 }, 00:20:46.419 "peer_address": { 00:20:46.419 "trtype": "TCP", 00:20:46.419 "adrfam": "IPv4", 00:20:46.419 "traddr": "10.0.0.1", 00:20:46.419 "trsvcid": "53554" 00:20:46.419 }, 00:20:46.419 "auth": { 00:20:46.419 "state": "completed", 00:20:46.419 "digest": "sha256", 00:20:46.419 "dhgroup": "null" 00:20:46.419 } 00:20:46.419 } 00:20:46.419 ]' 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.419 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.677 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.677 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.678 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.936 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:20:46.936 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:48.839 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.405 20:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.405 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.340 00:20:50.340 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.340 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.340 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.906 { 00:20:50.906 "cntlid": 5, 00:20:50.906 "qid": 0, 00:20:50.906 "state": "enabled", 00:20:50.906 "thread": "nvmf_tgt_poll_group_000", 00:20:50.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:50.906 "listen_address": { 00:20:50.906 "trtype": "TCP", 00:20:50.906 "adrfam": "IPv4", 00:20:50.906 "traddr": "10.0.0.2", 00:20:50.906 "trsvcid": "4420" 00:20:50.906 }, 00:20:50.906 "peer_address": { 00:20:50.906 "trtype": "TCP", 00:20:50.906 "adrfam": "IPv4", 00:20:50.906 "traddr": "10.0.0.1", 00:20:50.906 "trsvcid": "53590" 00:20:50.906 }, 00:20:50.906 "auth": { 00:20:50.906 "state": "completed", 00:20:50.906 "digest": "sha256", 00:20:50.906 "dhgroup": "null" 00:20:50.906 } 00:20:50.906 } 00:20:50.906 ]' 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.906 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.907 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.907 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.907 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.907 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.907 20:48:19 
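The qpairs listing is the target-side record of what was actually negotiated; the three jq assertions made after each attach can equally be collapsed into a single lookup against the auth object, for example:

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
            | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
    # expected for this pass: completed sha256 null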
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.907 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.474 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:20:51.474 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:53.377 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:53.636 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.637 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.637 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.637 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.637 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.637 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.896 00:20:53.896 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.896 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.896 20:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.465 { 00:20:54.465 "cntlid": 7, 00:20:54.465 "qid": 0, 00:20:54.465 "state": "enabled", 00:20:54.465 "thread": "nvmf_tgt_poll_group_000", 00:20:54.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:54.465 "listen_address": { 00:20:54.465 "trtype": "TCP", 00:20:54.465 "adrfam": "IPv4", 00:20:54.465 "traddr": "10.0.0.2", 00:20:54.465 "trsvcid": "4420" 00:20:54.465 }, 00:20:54.465 "peer_address": { 00:20:54.465 "trtype": "TCP", 00:20:54.465 "adrfam": "IPv4", 00:20:54.465 "traddr": "10.0.0.1", 00:20:54.465 "trsvcid": "53618" 00:20:54.465 }, 00:20:54.465 "auth": { 00:20:54.465 "state": "completed", 00:20:54.465 "digest": "sha256", 00:20:54.465 "dhgroup": "null" 00:20:54.465 } 00:20:54.465 } 00:20:54.465 ]' 00:20:54.465 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.725 20:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.664 20:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:20:55.664 20:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:57.567 20:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.567 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.568 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.568 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.568 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.136 00:20:58.136 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.136 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.136 20:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.704 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.704 { 00:20:58.704 "cntlid": 9, 00:20:58.704 "qid": 0, 00:20:58.704 "state": "enabled", 00:20:58.704 "thread": "nvmf_tgt_poll_group_000", 00:20:58.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:58.705 "listen_address": { 00:20:58.705 "trtype": "TCP", 00:20:58.705 "adrfam": "IPv4", 00:20:58.705 "traddr": "10.0.0.2", 00:20:58.705 "trsvcid": "4420" 00:20:58.705 }, 00:20:58.705 "peer_address": { 00:20:58.705 "trtype": "TCP", 00:20:58.705 "adrfam": "IPv4", 00:20:58.705 "traddr": "10.0.0.1", 00:20:58.705 "trsvcid": "45580" 00:20:58.705 }, 00:20:58.705 "auth": { 00:20:58.705 "state": "completed", 00:20:58.705 "digest": "sha256", 00:20:58.705 "dhgroup": "ffdhe2048" 00:20:58.705 } 00:20:58.705 } 00:20:58.705 ]' 00:20:58.705 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.705 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.705 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.705 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:58.705 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.964 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.965 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.965 20:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.534 20:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:20:59.534 20:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:01.465 20:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.745 20:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.745 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.004 00:21:02.263 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.263 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.263 20:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.832 { 00:21:02.832 "cntlid": 11, 00:21:02.832 "qid": 0, 00:21:02.832 "state": "enabled", 00:21:02.832 "thread": "nvmf_tgt_poll_group_000", 00:21:02.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:02.832 "listen_address": { 00:21:02.832 "trtype": "TCP", 00:21:02.832 "adrfam": "IPv4", 00:21:02.832 "traddr": "10.0.0.2", 00:21:02.832 "trsvcid": "4420" 00:21:02.832 }, 00:21:02.832 "peer_address": { 00:21:02.832 "trtype": "TCP", 00:21:02.832 "adrfam": "IPv4", 00:21:02.832 "traddr": "10.0.0.1", 00:21:02.832 "trsvcid": "45592" 00:21:02.832 }, 00:21:02.832 "auth": { 00:21:02.832 "state": "completed", 00:21:02.832 "digest": "sha256", 00:21:02.832 "dhgroup": "ffdhe2048" 00:21:02.832 } 00:21:02.832 } 00:21:02.832 ]' 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.832 20:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.832 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.091 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.091 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.091 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.661 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:03.661 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.037 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.605 20:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.605 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.173 00:21:06.431 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.431 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.431 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.690 { 00:21:06.690 "cntlid": 13, 00:21:06.690 "qid": 0, 00:21:06.690 "state": "enabled", 00:21:06.690 "thread": "nvmf_tgt_poll_group_000", 00:21:06.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:06.690 "listen_address": { 00:21:06.690 "trtype": "TCP", 00:21:06.690 "adrfam": "IPv4", 00:21:06.690 "traddr": "10.0.0.2", 00:21:06.690 "trsvcid": "4420" 00:21:06.690 }, 00:21:06.690 "peer_address": { 00:21:06.690 "trtype": "TCP", 00:21:06.690 "adrfam": "IPv4", 00:21:06.690 "traddr": "10.0.0.1", 00:21:06.690 "trsvcid": "58416" 00:21:06.690 }, 00:21:06.690 "auth": { 00:21:06.690 "state": "completed", 00:21:06.690 "digest": 
"sha256", 00:21:06.690 "dhgroup": "ffdhe2048" 00:21:06.690 } 00:21:06.690 } 00:21:06.690 ]' 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.690 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.949 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.949 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.949 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.207 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:07.207 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.116 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.686 20:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.686 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.687 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.687 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.256 00:21:10.256 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.256 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.256 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.196 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.196 { 00:21:11.196 "cntlid": 15, 00:21:11.196 "qid": 0, 00:21:11.196 "state": "enabled", 00:21:11.196 "thread": "nvmf_tgt_poll_group_000", 00:21:11.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:11.196 "listen_address": { 00:21:11.196 "trtype": "TCP", 00:21:11.196 "adrfam": "IPv4", 00:21:11.196 "traddr": "10.0.0.2", 00:21:11.196 "trsvcid": "4420" 00:21:11.197 }, 00:21:11.197 "peer_address": { 00:21:11.197 "trtype": "TCP", 00:21:11.197 "adrfam": "IPv4", 00:21:11.197 "traddr": "10.0.0.1", 00:21:11.197 
"trsvcid": "58434" 00:21:11.197 }, 00:21:11.197 "auth": { 00:21:11.197 "state": "completed", 00:21:11.197 "digest": "sha256", 00:21:11.197 "dhgroup": "ffdhe2048" 00:21:11.197 } 00:21:11.197 } 00:21:11.197 ]' 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.197 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.765 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:11.765 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.673 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:13.931 20:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.931 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.932 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.932 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.932 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.932 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.932 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.500 00:21:14.500 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.500 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.500 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.759 { 00:21:14.759 "cntlid": 17, 00:21:14.759 "qid": 0, 00:21:14.759 "state": "enabled", 00:21:14.759 "thread": "nvmf_tgt_poll_group_000", 00:21:14.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:14.759 "listen_address": { 00:21:14.759 "trtype": "TCP", 00:21:14.759 "adrfam": "IPv4", 
00:21:14.759 "traddr": "10.0.0.2", 00:21:14.759 "trsvcid": "4420" 00:21:14.759 }, 00:21:14.759 "peer_address": { 00:21:14.759 "trtype": "TCP", 00:21:14.759 "adrfam": "IPv4", 00:21:14.759 "traddr": "10.0.0.1", 00:21:14.759 "trsvcid": "58458" 00:21:14.759 }, 00:21:14.759 "auth": { 00:21:14.759 "state": "completed", 00:21:14.759 "digest": "sha256", 00:21:14.759 "dhgroup": "ffdhe3072" 00:21:14.759 } 00:21:14.759 } 00:21:14.759 ]' 00:21:14.759 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.018 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.276 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:15.276 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.818 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.756 00:21:18.756 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.756 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.756 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.016 { 
00:21:19.016 "cntlid": 19, 00:21:19.016 "qid": 0, 00:21:19.016 "state": "enabled", 00:21:19.016 "thread": "nvmf_tgt_poll_group_000", 00:21:19.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:19.016 "listen_address": { 00:21:19.016 "trtype": "TCP", 00:21:19.016 "adrfam": "IPv4", 00:21:19.016 "traddr": "10.0.0.2", 00:21:19.016 "trsvcid": "4420" 00:21:19.016 }, 00:21:19.016 "peer_address": { 00:21:19.016 "trtype": "TCP", 00:21:19.016 "adrfam": "IPv4", 00:21:19.016 "traddr": "10.0.0.1", 00:21:19.016 "trsvcid": "34820" 00:21:19.016 }, 00:21:19.016 "auth": { 00:21:19.016 "state": "completed", 00:21:19.016 "digest": "sha256", 00:21:19.016 "dhgroup": "ffdhe3072" 00:21:19.016 } 00:21:19.016 } 00:21:19.016 ]' 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.016 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.276 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.276 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.276 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.276 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.276 20:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.535 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:19.535 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:21.441 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:21.701 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.961 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.962 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.962 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.962 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.962 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.962 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.530 00:21:22.790 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.790 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.790 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.049 20:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.049 { 00:21:23.049 "cntlid": 21, 00:21:23.049 "qid": 0, 00:21:23.049 "state": "enabled", 00:21:23.049 "thread": "nvmf_tgt_poll_group_000", 00:21:23.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:23.049 "listen_address": { 00:21:23.049 "trtype": "TCP", 00:21:23.049 "adrfam": "IPv4", 00:21:23.049 "traddr": "10.0.0.2", 00:21:23.049 "trsvcid": "4420" 00:21:23.049 }, 00:21:23.049 "peer_address": { 00:21:23.049 "trtype": "TCP", 00:21:23.049 "adrfam": "IPv4", 00:21:23.049 "traddr": "10.0.0.1", 00:21:23.049 "trsvcid": "34848" 00:21:23.049 }, 00:21:23.049 "auth": { 00:21:23.049 "state": "completed", 00:21:23.049 "digest": "sha256", 00:21:23.049 "dhgroup": "ffdhe3072" 00:21:23.049 } 00:21:23.049 } 00:21:23.049 ]' 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.049 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.307 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.307 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.307 20:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.567 20:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:23.567 20:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:26.109 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.369 20:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.306 00:21:27.306 20:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.306 20:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.306 20:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.875 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.875 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.875 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.876 20:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.876 { 00:21:27.876 "cntlid": 23, 00:21:27.876 "qid": 0, 00:21:27.876 "state": "enabled", 00:21:27.876 "thread": "nvmf_tgt_poll_group_000", 00:21:27.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:27.876 "listen_address": { 00:21:27.876 "trtype": "TCP", 00:21:27.876 "adrfam": "IPv4", 00:21:27.876 "traddr": "10.0.0.2", 00:21:27.876 "trsvcid": "4420" 00:21:27.876 }, 00:21:27.876 "peer_address": { 00:21:27.876 "trtype": "TCP", 00:21:27.876 "adrfam": "IPv4", 00:21:27.876 "traddr": "10.0.0.1", 00:21:27.876 "trsvcid": "58322" 00:21:27.876 }, 00:21:27.876 "auth": { 00:21:27.876 "state": "completed", 00:21:27.876 "digest": "sha256", 00:21:27.876 "dhgroup": "ffdhe3072" 00:21:27.876 } 00:21:27.876 } 00:21:27.876 ]' 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.876 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.445 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:28.445 20:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:30.357 20:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:30.357 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.927 20:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.496 00:21:31.496 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.496 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.496 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.434 { 00:21:32.434 "cntlid": 25, 00:21:32.434 "qid": 0, 00:21:32.434 "state": "enabled", 00:21:32.434 "thread": "nvmf_tgt_poll_group_000", 00:21:32.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:32.434 "listen_address": { 00:21:32.434 "trtype": "TCP", 00:21:32.434 "adrfam": "IPv4", 00:21:32.434 "traddr": "10.0.0.2", 00:21:32.434 "trsvcid": "4420" 00:21:32.434 }, 00:21:32.434 "peer_address": { 00:21:32.434 "trtype": "TCP", 00:21:32.434 "adrfam": "IPv4", 00:21:32.434 "traddr": "10.0.0.1", 00:21:32.434 "trsvcid": "58354" 00:21:32.434 }, 00:21:32.434 "auth": { 00:21:32.434 "state": "completed", 00:21:32.434 "digest": "sha256", 00:21:32.434 "dhgroup": "ffdhe4096" 00:21:32.434 } 00:21:32.434 } 00:21:32.434 ]' 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.434 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.434 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.434 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.434 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.004 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:33.004 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:34.917 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.918 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.177 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.772 00:21:35.772 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.772 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.772 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.112 { 00:21:36.112 "cntlid": 27, 00:21:36.112 "qid": 0, 00:21:36.112 "state": "enabled", 00:21:36.112 "thread": "nvmf_tgt_poll_group_000", 00:21:36.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:36.112 "listen_address": { 00:21:36.112 "trtype": "TCP", 00:21:36.112 "adrfam": "IPv4", 00:21:36.112 "traddr": "10.0.0.2", 00:21:36.112 "trsvcid": "4420" 00:21:36.112 }, 00:21:36.112 "peer_address": { 00:21:36.112 "trtype": "TCP", 00:21:36.112 "adrfam": "IPv4", 00:21:36.112 "traddr": "10.0.0.1", 00:21:36.112 "trsvcid": "58360" 00:21:36.112 }, 00:21:36.112 "auth": { 00:21:36.112 "state": "completed", 00:21:36.112 "digest": "sha256", 00:21:36.112 "dhgroup": "ffdhe4096" 00:21:36.112 } 00:21:36.112 } 00:21:36.112 ]' 00:21:36.112 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.374 20:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.943 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:36.943 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:38.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:38.849 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.109 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.053 00:21:40.053 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
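Each digest/dhgroup/key combination in this phase repeats the same host/target RPC round. The condensed sketch below is assembled from the commands echoed in this run (sha256 with ffdhe4096 and key2/ckey2 shown; rpc.py is abbreviated from /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and the full invocation behind the suite's rpc_cmd wrapper is not echoed above):

    # host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # target side: allow the host NQN on cnode0 with key2 (controller key ckey2)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach a controller over TCP, authenticating with the same key pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # confirm the qpair negotiated the expected digest/dhgroup and completed authentication, then detach
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0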
00:21:40.053 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.053 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.621 { 00:21:40.621 "cntlid": 29, 00:21:40.621 "qid": 0, 00:21:40.621 "state": "enabled", 00:21:40.621 "thread": "nvmf_tgt_poll_group_000", 00:21:40.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:40.621 "listen_address": { 00:21:40.621 "trtype": "TCP", 00:21:40.621 "adrfam": "IPv4", 00:21:40.621 "traddr": "10.0.0.2", 00:21:40.621 "trsvcid": "4420" 00:21:40.621 }, 00:21:40.621 "peer_address": { 00:21:40.621 "trtype": "TCP", 00:21:40.621 "adrfam": "IPv4", 00:21:40.621 "traddr": "10.0.0.1", 00:21:40.621 "trsvcid": "60714" 00:21:40.621 }, 00:21:40.621 "auth": { 00:21:40.621 "state": "completed", 00:21:40.621 "digest": "sha256", 00:21:40.621 "dhgroup": "ffdhe4096" 00:21:40.621 } 00:21:40.621 } 00:21:40.621 ]' 00:21:40.621 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.882 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.141 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:41.141 20:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: 
--dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:43.678 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.679 20:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.938 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.876 00:21:44.876 20:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.876 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.876 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.136 { 00:21:45.136 "cntlid": 31, 00:21:45.136 "qid": 0, 00:21:45.136 "state": "enabled", 00:21:45.136 "thread": "nvmf_tgt_poll_group_000", 00:21:45.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:45.136 "listen_address": { 00:21:45.136 "trtype": "TCP", 00:21:45.136 "adrfam": "IPv4", 00:21:45.136 "traddr": "10.0.0.2", 00:21:45.136 "trsvcid": "4420" 00:21:45.136 }, 00:21:45.136 "peer_address": { 00:21:45.136 "trtype": "TCP", 00:21:45.136 "adrfam": "IPv4", 00:21:45.136 "traddr": "10.0.0.1", 00:21:45.136 "trsvcid": "60754" 00:21:45.136 }, 00:21:45.136 "auth": { 00:21:45.136 "state": "completed", 00:21:45.136 "digest": "sha256", 00:21:45.136 "dhgroup": "ffdhe4096" 00:21:45.136 } 00:21:45.136 } 00:21:45.136 ]' 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.136 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.396 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.396 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.396 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.396 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.396 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.964 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:45.964 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.873 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.132 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.392 20:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.331 00:21:49.331 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.331 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.331 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.591 { 00:21:49.591 "cntlid": 33, 00:21:49.591 "qid": 0, 00:21:49.591 "state": "enabled", 00:21:49.591 "thread": "nvmf_tgt_poll_group_000", 00:21:49.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:49.591 "listen_address": { 00:21:49.591 "trtype": "TCP", 00:21:49.591 "adrfam": "IPv4", 00:21:49.591 "traddr": "10.0.0.2", 00:21:49.591 "trsvcid": "4420" 00:21:49.591 }, 00:21:49.591 "peer_address": { 00:21:49.591 "trtype": "TCP", 00:21:49.591 "adrfam": "IPv4", 00:21:49.591 "traddr": "10.0.0.1", 00:21:49.591 "trsvcid": "47720" 00:21:49.591 }, 00:21:49.591 "auth": { 00:21:49.591 "state": "completed", 00:21:49.591 "digest": "sha256", 00:21:49.591 "dhgroup": "ffdhe6144" 00:21:49.591 } 00:21:49.591 } 00:21:49.591 ]' 00:21:49.591 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.850 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.110 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:50.110 20:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:21:52.014 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.015 20:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.393 00:21:53.393 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.393 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.393 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.960 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.960 { 00:21:53.960 "cntlid": 35, 00:21:53.960 "qid": 0, 00:21:53.960 "state": "enabled", 00:21:53.960 "thread": "nvmf_tgt_poll_group_000", 00:21:53.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:53.960 "listen_address": { 00:21:53.960 "trtype": "TCP", 00:21:53.960 "adrfam": "IPv4", 00:21:53.960 "traddr": "10.0.0.2", 00:21:53.960 "trsvcid": "4420" 00:21:53.960 }, 00:21:53.960 "peer_address": { 00:21:53.960 "trtype": "TCP", 00:21:53.960 "adrfam": "IPv4", 00:21:53.960 "traddr": "10.0.0.1", 00:21:53.960 "trsvcid": "47744" 00:21:53.960 }, 00:21:53.960 "auth": { 00:21:53.960 "state": "completed", 00:21:53.960 "digest": "sha256", 00:21:53.960 "dhgroup": "ffdhe6144" 00:21:53.960 } 00:21:53.960 } 00:21:53.960 ]' 00:21:53.961 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.961 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.961 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.219 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.219 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.219 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.219 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.219 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.478 20:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:54.478 20:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.382 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.382 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:56.382 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.642 20:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.579 00:21:57.579 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.579 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.579 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.839 { 00:21:57.839 "cntlid": 37, 00:21:57.839 "qid": 0, 00:21:57.839 "state": "enabled", 00:21:57.839 "thread": "nvmf_tgt_poll_group_000", 00:21:57.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:57.839 "listen_address": { 00:21:57.839 "trtype": "TCP", 00:21:57.839 "adrfam": "IPv4", 00:21:57.839 "traddr": "10.0.0.2", 00:21:57.839 "trsvcid": "4420" 00:21:57.839 }, 00:21:57.839 "peer_address": { 00:21:57.839 "trtype": "TCP", 00:21:57.839 "adrfam": "IPv4", 00:21:57.839 "traddr": "10.0.0.1", 00:21:57.839 "trsvcid": "55516" 00:21:57.839 }, 00:21:57.839 "auth": { 00:21:57.839 "state": "completed", 00:21:57.839 "digest": "sha256", 00:21:57.839 "dhgroup": "ffdhe6144" 00:21:57.839 } 00:21:57.839 } 00:21:57.839 ]' 00:21:57.839 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:58.098 20:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.664 20:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:21:58.664 20:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.042 20:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.612 20:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.612 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.993 00:22:01.993 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.993 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.993 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.993 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.993 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.252 { 00:22:02.252 "cntlid": 39, 00:22:02.252 "qid": 0, 00:22:02.252 "state": "enabled", 00:22:02.252 "thread": "nvmf_tgt_poll_group_000", 00:22:02.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:02.252 "listen_address": { 00:22:02.252 "trtype": "TCP", 00:22:02.252 "adrfam": "IPv4", 00:22:02.252 "traddr": "10.0.0.2", 00:22:02.252 "trsvcid": "4420" 00:22:02.252 }, 00:22:02.252 "peer_address": { 00:22:02.252 "trtype": "TCP", 00:22:02.252 "adrfam": "IPv4", 00:22:02.252 "traddr": "10.0.0.1", 00:22:02.252 "trsvcid": "55542" 00:22:02.252 }, 00:22:02.252 "auth": { 00:22:02.252 "state": "completed", 00:22:02.252 "digest": "sha256", 00:22:02.252 "dhgroup": "ffdhe6144" 00:22:02.252 } 00:22:02.252 } 00:22:02.252 ]' 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.252 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.253 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:02.253 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.253 20:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.822 20:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:02.822 20:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
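The iterations traced above and below all follow the same DH-HMAC-CHAP round trip: restrict the host-side initiator to one digest/dhgroup combination, register the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC server to force an authenticated connect, verify the queue pair, then tear everything down before the next combination. A minimal stand-alone sketch of one such iteration, using the same rpc.py path, sockets and NQNs that appear in this log (key0/ckey0 are key names assumed to have been registered earlier in the run; everything else is taken verbatim from the commands above):

    #!/usr/bin/env bash
    # Sketch of one auth.sh iteration; paths/NQNs copied from this log, key names are placeholders.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # 1. Limit the host-side initiator to a single digest and DH group.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # 2. Allow the host on the target subsystem with the key pair under test.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller via the host RPC server; this performs the authenticated connect.
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify, then tear down before the next digest/dhgroup/key combination.
    $rpc -s $hostsock bdev_nvme_get_controllers
    $rpc nvmf_subsystem_get_qpairs $subnqn
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn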
00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.729 20:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.636 00:22:06.636 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.636 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.636 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.206 { 00:22:07.206 "cntlid": 41, 00:22:07.206 "qid": 0, 00:22:07.206 "state": "enabled", 00:22:07.206 "thread": "nvmf_tgt_poll_group_000", 00:22:07.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:07.206 "listen_address": { 00:22:07.206 "trtype": "TCP", 00:22:07.206 "adrfam": "IPv4", 00:22:07.206 "traddr": "10.0.0.2", 00:22:07.206 "trsvcid": "4420" 00:22:07.206 }, 00:22:07.206 "peer_address": { 00:22:07.206 "trtype": "TCP", 00:22:07.206 "adrfam": "IPv4", 00:22:07.206 "traddr": "10.0.0.1", 00:22:07.206 "trsvcid": "55564" 00:22:07.206 }, 00:22:07.206 "auth": { 00:22:07.206 "state": "completed", 00:22:07.206 "digest": "sha256", 00:22:07.206 "dhgroup": "ffdhe8192" 00:22:07.206 } 00:22:07.206 } 00:22:07.206 ]' 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.206 20:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.206 20:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.466 20:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.466 20:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.466 20:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.725 20:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:07.725 20:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.264 20:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.522 20:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.473 00:22:12.473 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.473 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.473 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.734 { 00:22:12.734 "cntlid": 43, 00:22:12.734 "qid": 0, 00:22:12.734 "state": "enabled", 00:22:12.734 "thread": "nvmf_tgt_poll_group_000", 00:22:12.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:12.734 "listen_address": { 00:22:12.734 "trtype": "TCP", 00:22:12.734 "adrfam": "IPv4", 00:22:12.734 "traddr": "10.0.0.2", 00:22:12.734 "trsvcid": "4420" 00:22:12.734 }, 00:22:12.734 "peer_address": { 00:22:12.734 "trtype": "TCP", 00:22:12.734 "adrfam": "IPv4", 00:22:12.734 "traddr": "10.0.0.1", 00:22:12.734 "trsvcid": "49236" 00:22:12.734 }, 00:22:12.734 "auth": { 00:22:12.734 "state": "completed", 00:22:12.734 "digest": "sha256", 00:22:12.734 "dhgroup": "ffdhe8192" 00:22:12.734 } 00:22:12.734 } 00:22:12.734 ]' 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:22:12.734 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.993 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.993 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.993 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.993 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.993 20:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.252 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:13.252 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:15.158 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.158 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.159 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:15.729 20:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.729 20:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.639 00:22:17.639 20:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.639 20:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.639 20:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.900 { 00:22:17.900 "cntlid": 45, 00:22:17.900 "qid": 0, 00:22:17.900 "state": "enabled", 00:22:17.900 "thread": "nvmf_tgt_poll_group_000", 00:22:17.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:17.900 "listen_address": { 00:22:17.900 "trtype": "TCP", 00:22:17.900 "adrfam": "IPv4", 00:22:17.900 "traddr": "10.0.0.2", 00:22:17.900 "trsvcid": "4420" 00:22:17.900 }, 00:22:17.900 "peer_address": { 00:22:17.900 "trtype": "TCP", 00:22:17.900 "adrfam": "IPv4", 00:22:17.900 "traddr": "10.0.0.1", 00:22:17.900 "trsvcid": "40050" 00:22:17.900 }, 00:22:17.900 "auth": { 00:22:17.900 "state": "completed", 00:22:17.900 "digest": "sha256", 00:22:17.900 "dhgroup": "ffdhe8192" 00:22:17.900 } 00:22:17.900 } 00:22:17.900 ]' 00:22:17.900 
20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.900 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.160 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.160 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.160 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.161 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.161 20:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.730 20:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:18.730 20:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.639 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:20.898 20:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.898 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.158 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.158 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:21.158 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.158 20:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.539 00:22:22.539 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.539 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.539 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.109 { 00:22:23.109 "cntlid": 47, 00:22:23.109 "qid": 0, 00:22:23.109 "state": "enabled", 00:22:23.109 "thread": "nvmf_tgt_poll_group_000", 00:22:23.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:23.109 "listen_address": { 00:22:23.109 "trtype": "TCP", 00:22:23.109 "adrfam": "IPv4", 00:22:23.109 "traddr": "10.0.0.2", 00:22:23.109 "trsvcid": "4420" 00:22:23.109 }, 00:22:23.109 "peer_address": { 00:22:23.109 "trtype": "TCP", 00:22:23.109 "adrfam": "IPv4", 00:22:23.109 "traddr": "10.0.0.1", 00:22:23.109 "trsvcid": "40082" 00:22:23.109 }, 00:22:23.109 "auth": { 00:22:23.109 "state": "completed", 00:22:23.109 
"digest": "sha256", 00:22:23.109 "dhgroup": "ffdhe8192" 00:22:23.109 } 00:22:23.109 } 00:22:23.109 ]' 00:22:23.109 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.368 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.368 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.368 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.368 20:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.368 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.368 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.368 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.937 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:23.937 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:25.845 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:26.414 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:26.414 20:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.414 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.414 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:26.414 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.414 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.415 20:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.984 00:22:26.984 20:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.984 20:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.984 20:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.554 { 00:22:27.554 "cntlid": 49, 00:22:27.554 "qid": 0, 00:22:27.554 "state": "enabled", 00:22:27.554 "thread": "nvmf_tgt_poll_group_000", 00:22:27.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:27.554 "listen_address": { 00:22:27.554 "trtype": "TCP", 00:22:27.554 "adrfam": "IPv4", 
00:22:27.554 "traddr": "10.0.0.2", 00:22:27.554 "trsvcid": "4420" 00:22:27.554 }, 00:22:27.554 "peer_address": { 00:22:27.554 "trtype": "TCP", 00:22:27.554 "adrfam": "IPv4", 00:22:27.554 "traddr": "10.0.0.1", 00:22:27.554 "trsvcid": "48472" 00:22:27.554 }, 00:22:27.554 "auth": { 00:22:27.554 "state": "completed", 00:22:27.554 "digest": "sha384", 00:22:27.554 "dhgroup": "null" 00:22:27.554 } 00:22:27.554 } 00:22:27.554 ]' 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.554 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.122 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:28.122 20:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.027 20:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.596 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.855 00:22:30.855 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.855 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.855 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.425 { 00:22:31.425 "cntlid": 51, 00:22:31.425 "qid": 0, 00:22:31.425 "state": "enabled", 
00:22:31.425 "thread": "nvmf_tgt_poll_group_000", 00:22:31.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:31.425 "listen_address": { 00:22:31.425 "trtype": "TCP", 00:22:31.425 "adrfam": "IPv4", 00:22:31.425 "traddr": "10.0.0.2", 00:22:31.425 "trsvcid": "4420" 00:22:31.425 }, 00:22:31.425 "peer_address": { 00:22:31.425 "trtype": "TCP", 00:22:31.425 "adrfam": "IPv4", 00:22:31.425 "traddr": "10.0.0.1", 00:22:31.425 "trsvcid": "48504" 00:22:31.425 }, 00:22:31.425 "auth": { 00:22:31.425 "state": "completed", 00:22:31.425 "digest": "sha384", 00:22:31.425 "dhgroup": "null" 00:22:31.425 } 00:22:31.425 } 00:22:31.425 ]' 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.425 20:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.426 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:31.426 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.426 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.426 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.426 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.685 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:31.685 20:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:22:33.591 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.158 20:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.416 00:22:34.416 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.416 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.416 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.985 20:50:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.985 { 00:22:34.985 "cntlid": 53, 00:22:34.985 "qid": 0, 00:22:34.985 "state": "enabled", 00:22:34.985 "thread": "nvmf_tgt_poll_group_000", 00:22:34.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:34.985 "listen_address": { 00:22:34.985 "trtype": "TCP", 00:22:34.985 "adrfam": "IPv4", 00:22:34.985 "traddr": "10.0.0.2", 00:22:34.985 "trsvcid": "4420" 00:22:34.985 }, 00:22:34.985 "peer_address": { 00:22:34.985 "trtype": "TCP", 00:22:34.985 "adrfam": "IPv4", 00:22:34.985 "traddr": "10.0.0.1", 00:22:34.985 "trsvcid": "48530" 00:22:34.985 }, 00:22:34.985 "auth": { 00:22:34.985 "state": "completed", 00:22:34.985 "digest": "sha384", 00:22:34.985 "dhgroup": "null" 00:22:34.985 } 00:22:34.985 } 00:22:34.985 ]' 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.985 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.245 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:35.245 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.245 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.245 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.245 20:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.503 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:35.504 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:37.417 20:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.677 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.246 00:22:38.246 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.246 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.246 20:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.816 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.816 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.816 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.816 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.076 { 00:22:39.076 "cntlid": 55, 00:22:39.076 "qid": 0, 00:22:39.076 "state": "enabled", 00:22:39.076 "thread": "nvmf_tgt_poll_group_000", 00:22:39.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:39.076 "listen_address": { 00:22:39.076 "trtype": "TCP", 00:22:39.076 "adrfam": "IPv4", 00:22:39.076 "traddr": "10.0.0.2", 00:22:39.076 "trsvcid": "4420" 00:22:39.076 }, 00:22:39.076 "peer_address": { 00:22:39.076 "trtype": "TCP", 00:22:39.076 "adrfam": "IPv4", 00:22:39.076 "traddr": "10.0.0.1", 00:22:39.076 "trsvcid": "60298" 00:22:39.076 }, 00:22:39.076 "auth": { 00:22:39.076 "state": "completed", 00:22:39.076 "digest": "sha384", 00:22:39.076 "dhgroup": "null" 00:22:39.076 } 00:22:39.076 } 00:22:39.076 ]' 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.076 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.336 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:39.336 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:41.245 20:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.245 20:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.245 20:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.245 20:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.245 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.505 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.505 20:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.505 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:41.505 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.075 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.645 00:22:42.645 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.645 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.645 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.586 { 00:22:43.586 "cntlid": 57, 00:22:43.586 "qid": 0, 00:22:43.586 "state": "enabled", 00:22:43.586 "thread": "nvmf_tgt_poll_group_000", 00:22:43.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:43.586 "listen_address": { 00:22:43.586 "trtype": "TCP", 00:22:43.586 "adrfam": "IPv4", 00:22:43.586 "traddr": "10.0.0.2", 00:22:43.586 "trsvcid": "4420" 00:22:43.586 }, 00:22:43.586 "peer_address": { 00:22:43.586 "trtype": "TCP", 00:22:43.586 "adrfam": "IPv4", 00:22:43.586 "traddr": "10.0.0.1", 00:22:43.586 "trsvcid": "60332" 00:22:43.586 }, 00:22:43.586 "auth": { 00:22:43.586 "state": "completed", 00:22:43.586 "digest": "sha384", 00:22:43.586 "dhgroup": "ffdhe2048" 00:22:43.586 } 00:22:43.586 } 00:22:43.586 ]' 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.586 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.523 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:44.524 20:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:46.457 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.790 20:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.368 00:22:47.368 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.368 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.368 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.306 { 00:22:48.306 "cntlid": 59, 00:22:48.306 "qid": 0, 00:22:48.306 "state": "enabled", 00:22:48.306 "thread": "nvmf_tgt_poll_group_000", 00:22:48.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:48.306 "listen_address": { 00:22:48.306 "trtype": "TCP", 00:22:48.306 "adrfam": "IPv4", 00:22:48.306 "traddr": "10.0.0.2", 00:22:48.306 "trsvcid": "4420" 00:22:48.306 }, 00:22:48.306 "peer_address": { 00:22:48.306 "trtype": "TCP", 00:22:48.306 "adrfam": "IPv4", 00:22:48.306 "traddr": "10.0.0.1", 00:22:48.306 "trsvcid": "55286" 00:22:48.306 }, 00:22:48.306 "auth": { 00:22:48.306 "state": "completed", 00:22:48.306 "digest": "sha384", 00:22:48.306 "dhgroup": "ffdhe2048" 00:22:48.306 } 00:22:48.306 } 00:22:48.306 ]' 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:48.306 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.306 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.306 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.306 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.875 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:48.875 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:50.780 20:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.348 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.915 00:22:51.915 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.915 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:51.915 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.484 { 00:22:52.484 "cntlid": 61, 00:22:52.484 "qid": 0, 00:22:52.484 "state": "enabled", 00:22:52.484 "thread": "nvmf_tgt_poll_group_000", 00:22:52.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:52.484 "listen_address": { 00:22:52.484 "trtype": "TCP", 00:22:52.484 "adrfam": "IPv4", 00:22:52.484 "traddr": "10.0.0.2", 00:22:52.484 "trsvcid": "4420" 00:22:52.484 }, 00:22:52.484 "peer_address": { 00:22:52.484 "trtype": "TCP", 00:22:52.484 "adrfam": "IPv4", 00:22:52.484 "traddr": "10.0.0.1", 00:22:52.484 "trsvcid": "55320" 00:22:52.484 }, 00:22:52.484 "auth": { 00:22:52.484 "state": "completed", 00:22:52.484 "digest": "sha384", 00:22:52.484 "dhgroup": "ffdhe2048" 00:22:52.484 } 00:22:52.484 } 00:22:52.484 ]' 00:22:52.484 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.484 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.055 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:53.055 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.964 20:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:55.531 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.099 00:22:56.099 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.099 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:56.099 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.669 { 00:22:56.669 "cntlid": 63, 00:22:56.669 "qid": 0, 00:22:56.669 "state": "enabled", 00:22:56.669 "thread": "nvmf_tgt_poll_group_000", 00:22:56.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:56.669 "listen_address": { 00:22:56.669 "trtype": "TCP", 00:22:56.669 "adrfam": "IPv4", 00:22:56.669 "traddr": "10.0.0.2", 00:22:56.669 "trsvcid": "4420" 00:22:56.669 }, 00:22:56.669 "peer_address": { 00:22:56.669 "trtype": "TCP", 00:22:56.669 "adrfam": "IPv4", 00:22:56.669 "traddr": "10.0.0.1", 00:22:56.669 "trsvcid": "38550" 00:22:56.669 }, 00:22:56.669 "auth": { 00:22:56.669 "state": "completed", 00:22:56.669 "digest": "sha384", 00:22:56.669 "dhgroup": "ffdhe2048" 00:22:56.669 } 00:22:56.669 } 00:22:56.669 ]' 00:22:56.669 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.928 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.928 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.928 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:56.928 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.928 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.929 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.929 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.497 20:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:22:57.497 20:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:23:00.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.036 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.604 
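[editor's note] For readers following the trace: the sequence that repeats above for every digest/dhgroup/key combination reduces to a handful of RPC calls before the qpair check. A minimal sketch of that setup is below, with the socket path, addresses, NQNs and key names copied from the log; the condensed shell form itself is an assumption, not the literal contents of target/auth.sh, and the target-side socket is assumed to be the default because the trace never prints it.

    # Sketch of one connect_authenticate round (here: sha384 / ffdhe3072 / key0).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # 1. Limit the host-side initiator to a single digest/dhgroup combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # 2. Register the host NQN on the target subsystem with its DH-HMAC-CHAP key.
    #    In this run keys 0-2 also carry a separate controller key (ckeyN),
    #    while key3 is used on its own. (Target RPC socket assumed default.)
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller over TCP from the host, authenticating with the same keys.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0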
00:23:00.604 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.604 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.604 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.863 { 00:23:00.863 "cntlid": 65, 00:23:00.863 "qid": 0, 00:23:00.863 "state": "enabled", 00:23:00.863 "thread": "nvmf_tgt_poll_group_000", 00:23:00.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:00.863 "listen_address": { 00:23:00.863 "trtype": "TCP", 00:23:00.863 "adrfam": "IPv4", 00:23:00.863 "traddr": "10.0.0.2", 00:23:00.863 "trsvcid": "4420" 00:23:00.863 }, 00:23:00.863 "peer_address": { 00:23:00.863 "trtype": "TCP", 00:23:00.863 "adrfam": "IPv4", 00:23:00.863 "traddr": "10.0.0.1", 00:23:00.863 "trsvcid": "38582" 00:23:00.863 }, 00:23:00.863 "auth": { 00:23:00.863 "state": "completed", 00:23:00.863 "digest": "sha384", 00:23:00.863 "dhgroup": "ffdhe3072" 00:23:00.863 } 00:23:00.863 } 00:23:00.863 ]' 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.863 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.123 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:01.124 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.124 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.124 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.124 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.691 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:01.691 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:03.600 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:04.540 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:23:04.540 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.540 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:04.540 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:04.540 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.541 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.110 00:23:05.110 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.110 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.110 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.049 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.049 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.049 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.049 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.050 { 00:23:06.050 "cntlid": 67, 00:23:06.050 "qid": 0, 00:23:06.050 "state": "enabled", 00:23:06.050 "thread": "nvmf_tgt_poll_group_000", 00:23:06.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:06.050 "listen_address": { 00:23:06.050 "trtype": "TCP", 00:23:06.050 "adrfam": "IPv4", 00:23:06.050 "traddr": "10.0.0.2", 00:23:06.050 "trsvcid": "4420" 00:23:06.050 }, 00:23:06.050 "peer_address": { 00:23:06.050 "trtype": "TCP", 00:23:06.050 "adrfam": "IPv4", 00:23:06.050 "traddr": "10.0.0.1", 00:23:06.050 "trsvcid": "38608" 00:23:06.050 }, 00:23:06.050 "auth": { 00:23:06.050 "state": "completed", 00:23:06.050 "digest": "sha384", 00:23:06.050 "dhgroup": "ffdhe3072" 00:23:06.050 } 00:23:06.050 } 00:23:06.050 ]' 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.050 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.621 20:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret 
DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:06.621 20:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:08.531 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.100 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.055 00:23:10.055 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.055 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.055 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.315 { 00:23:10.315 "cntlid": 69, 00:23:10.315 "qid": 0, 00:23:10.315 "state": "enabled", 00:23:10.315 "thread": "nvmf_tgt_poll_group_000", 00:23:10.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:10.315 "listen_address": { 00:23:10.315 "trtype": "TCP", 00:23:10.315 "adrfam": "IPv4", 00:23:10.315 "traddr": "10.0.0.2", 00:23:10.315 "trsvcid": "4420" 00:23:10.315 }, 00:23:10.315 "peer_address": { 00:23:10.315 "trtype": "TCP", 00:23:10.315 "adrfam": "IPv4", 00:23:10.315 "traddr": "10.0.0.1", 00:23:10.315 "trsvcid": "41916" 00:23:10.315 }, 00:23:10.315 "auth": { 00:23:10.315 "state": "completed", 00:23:10.315 "digest": "sha384", 00:23:10.315 "dhgroup": "ffdhe3072" 00:23:10.315 } 00:23:10.315 } 00:23:10.315 ]' 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:10.315 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.315 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:10.315 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.315 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.315 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.316 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:10.885 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:10.885 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.795 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
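[editor's note] The block that follows, like every earlier iteration, validates the negotiated parameters by dumping the subsystem's qpairs and picking the auth fields out with jq (auth.sh@74-77 in the trace). A rough stand-alone equivalent of that check, with the jq filters and expected values taken from the log and only the surrounding shell assumed:

    # Query the target for the active qpairs of the test subsystem.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target socket assumed default
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The negotiated digest, DH group and authentication state must match
    # what was configured for this round, e.g. sha384 / ffdhe3072 / completed.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]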
00:23:13.053 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.054 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.624 00:23:13.624 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.624 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.624 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.884 { 00:23:13.884 "cntlid": 71, 00:23:13.884 "qid": 0, 00:23:13.884 "state": "enabled", 00:23:13.884 "thread": "nvmf_tgt_poll_group_000", 00:23:13.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:13.884 "listen_address": { 00:23:13.884 "trtype": "TCP", 00:23:13.884 "adrfam": "IPv4", 00:23:13.884 "traddr": "10.0.0.2", 00:23:13.884 "trsvcid": "4420" 00:23:13.884 }, 00:23:13.884 "peer_address": { 00:23:13.884 "trtype": "TCP", 00:23:13.884 "adrfam": "IPv4", 00:23:13.884 "traddr": "10.0.0.1", 00:23:13.884 "trsvcid": "41942" 00:23:13.884 }, 00:23:13.884 "auth": { 00:23:13.884 "state": "completed", 00:23:13.884 "digest": "sha384", 00:23:13.884 "dhgroup": "ffdhe3072" 00:23:13.884 } 00:23:13.884 } 00:23:13.884 ]' 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.884 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.145 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:14.145 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.145 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.145 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.145 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.714 20:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:14.715 20:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.621 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.880 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.450 00:23:17.450 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.450 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.450 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.019 { 00:23:18.019 "cntlid": 73, 00:23:18.019 "qid": 0, 00:23:18.019 "state": "enabled", 00:23:18.019 "thread": "nvmf_tgt_poll_group_000", 00:23:18.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:18.019 "listen_address": { 00:23:18.019 "trtype": "TCP", 00:23:18.019 "adrfam": "IPv4", 00:23:18.019 "traddr": "10.0.0.2", 00:23:18.019 "trsvcid": "4420" 00:23:18.019 }, 00:23:18.019 "peer_address": { 00:23:18.019 "trtype": "TCP", 00:23:18.019 "adrfam": "IPv4", 00:23:18.019 "traddr": "10.0.0.1", 00:23:18.019 "trsvcid": "42422" 00:23:18.019 }, 00:23:18.019 "auth": { 00:23:18.019 "state": "completed", 00:23:18.019 "digest": "sha384", 00:23:18.019 "dhgroup": "ffdhe4096" 00:23:18.019 } 00:23:18.019 } 00:23:18.019 ]' 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.019 
20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.019 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.588 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:18.588 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:20.498 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.498 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:20.498 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.498 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.498 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.499 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.499 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.499 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.069 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.701 00:23:21.701 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.701 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.701 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.270 { 00:23:22.270 "cntlid": 75, 00:23:22.270 "qid": 0, 00:23:22.270 "state": "enabled", 00:23:22.270 "thread": "nvmf_tgt_poll_group_000", 00:23:22.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:22.270 "listen_address": { 00:23:22.270 "trtype": "TCP", 00:23:22.270 "adrfam": "IPv4", 00:23:22.270 "traddr": "10.0.0.2", 00:23:22.270 "trsvcid": "4420" 00:23:22.270 }, 00:23:22.270 "peer_address": { 00:23:22.270 "trtype": "TCP", 00:23:22.270 "adrfam": "IPv4", 00:23:22.270 "traddr": "10.0.0.1", 00:23:22.270 "trsvcid": "42446" 00:23:22.270 }, 00:23:22.270 "auth": { 00:23:22.270 "state": "completed", 00:23:22.270 "digest": "sha384", 00:23:22.270 "dhgroup": "ffdhe4096" 00:23:22.270 } 00:23:22.270 } 00:23:22.270 ]' 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.270 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.270 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:23:22.270 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.529 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.529 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.529 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.787 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:22.787 20:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.163 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.421 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.360 00:23:25.360 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.360 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.360 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.620 { 00:23:25.620 "cntlid": 77, 00:23:25.620 "qid": 0, 00:23:25.620 "state": "enabled", 00:23:25.620 "thread": "nvmf_tgt_poll_group_000", 00:23:25.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:25.620 "listen_address": { 00:23:25.620 "trtype": "TCP", 00:23:25.620 "adrfam": "IPv4", 00:23:25.620 "traddr": "10.0.0.2", 00:23:25.620 "trsvcid": "4420" 00:23:25.620 }, 00:23:25.620 "peer_address": { 00:23:25.620 "trtype": "TCP", 00:23:25.620 "adrfam": "IPv4", 00:23:25.620 "traddr": "10.0.0.1", 00:23:25.620 "trsvcid": "42478" 00:23:25.620 }, 00:23:25.620 "auth": { 00:23:25.620 "state": "completed", 00:23:25.620 "digest": "sha384", 00:23:25.620 "dhgroup": "ffdhe4096" 00:23:25.620 } 00:23:25.620 } 00:23:25.620 ]' 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.620 20:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.620 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.880 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:25.880 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.785 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.354 20:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.300 00:23:29.300 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.300 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.300 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.866 { 00:23:29.866 "cntlid": 79, 00:23:29.866 "qid": 0, 00:23:29.866 "state": "enabled", 00:23:29.866 "thread": "nvmf_tgt_poll_group_000", 00:23:29.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:29.866 "listen_address": { 00:23:29.866 "trtype": "TCP", 00:23:29.866 "adrfam": "IPv4", 00:23:29.866 "traddr": "10.0.0.2", 00:23:29.866 "trsvcid": "4420" 00:23:29.866 }, 00:23:29.866 "peer_address": { 00:23:29.866 "trtype": "TCP", 00:23:29.866 "adrfam": "IPv4", 00:23:29.866 "traddr": "10.0.0.1", 00:23:29.866 "trsvcid": "47208" 00:23:29.866 }, 00:23:29.866 "auth": { 00:23:29.866 "state": "completed", 00:23:29.866 "digest": "sha384", 00:23:29.866 "dhgroup": "ffdhe4096" 00:23:29.866 } 00:23:29.866 } 00:23:29.866 ]' 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.866 20:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.866 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.435 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:30.435 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.345 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:32.603 20:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.603 20:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.542 00:23:33.542 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.542 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.542 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.802 { 00:23:33.802 "cntlid": 81, 00:23:33.802 "qid": 0, 00:23:33.802 "state": "enabled", 00:23:33.802 "thread": "nvmf_tgt_poll_group_000", 00:23:33.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:33.802 "listen_address": { 00:23:33.802 "trtype": "TCP", 00:23:33.802 "adrfam": "IPv4", 00:23:33.802 "traddr": "10.0.0.2", 00:23:33.802 "trsvcid": "4420" 00:23:33.802 }, 00:23:33.802 "peer_address": { 00:23:33.802 "trtype": "TCP", 00:23:33.802 "adrfam": "IPv4", 00:23:33.802 "traddr": "10.0.0.1", 00:23:33.802 "trsvcid": "47238" 00:23:33.802 }, 00:23:33.802 "auth": { 00:23:33.802 "state": "completed", 00:23:33.802 "digest": 
"sha384", 00:23:33.802 "dhgroup": "ffdhe6144" 00:23:33.802 } 00:23:33.802 } 00:23:33.802 ]' 00:23:33.802 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.062 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.062 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.062 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:34.062 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.062 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.063 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.063 20:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.629 20:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:34.629 20:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:36.532 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.099 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.358 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.358 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.358 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.358 20:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.294 00:23:38.294 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.294 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.294 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.553 { 00:23:38.553 "cntlid": 83, 00:23:38.553 "qid": 0, 00:23:38.553 "state": "enabled", 00:23:38.553 "thread": "nvmf_tgt_poll_group_000", 00:23:38.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:38.553 "listen_address": { 00:23:38.553 "trtype": "TCP", 00:23:38.553 "adrfam": "IPv4", 00:23:38.553 "traddr": "10.0.0.2", 00:23:38.553 
"trsvcid": "4420" 00:23:38.553 }, 00:23:38.553 "peer_address": { 00:23:38.553 "trtype": "TCP", 00:23:38.553 "adrfam": "IPv4", 00:23:38.553 "traddr": "10.0.0.1", 00:23:38.553 "trsvcid": "53890" 00:23:38.553 }, 00:23:38.553 "auth": { 00:23:38.553 "state": "completed", 00:23:38.553 "digest": "sha384", 00:23:38.553 "dhgroup": "ffdhe6144" 00:23:38.553 } 00:23:38.553 } 00:23:38.553 ]' 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.553 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.121 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:39.121 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.025 20:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.593 
20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.593 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.594 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.594 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.594 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.594 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.531 00:23:42.531 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.531 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.531 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.096 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.096 { 00:23:43.096 "cntlid": 85, 00:23:43.096 "qid": 0, 00:23:43.096 "state": "enabled", 00:23:43.096 "thread": "nvmf_tgt_poll_group_000", 00:23:43.096 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:43.096 "listen_address": { 00:23:43.096 "trtype": "TCP", 00:23:43.096 "adrfam": "IPv4", 00:23:43.096 "traddr": "10.0.0.2", 00:23:43.096 "trsvcid": "4420" 00:23:43.096 }, 00:23:43.096 "peer_address": { 00:23:43.096 "trtype": "TCP", 00:23:43.096 "adrfam": "IPv4", 00:23:43.096 "traddr": "10.0.0.1", 00:23:43.096 "trsvcid": "53916" 00:23:43.096 }, 00:23:43.096 "auth": { 00:23:43.096 "state": "completed", 00:23:43.096 "digest": "sha384", 00:23:43.096 "dhgroup": "ffdhe6144" 00:23:43.096 } 00:23:43.097 } 00:23:43.097 ]' 00:23:43.097 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.355 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:43.355 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.355 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:43.355 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.355 20:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.355 20:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.355 20:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.922 20:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:43.922 20:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.825 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:45.826 20:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.826 20:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:47.201 00:23:47.201 20:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.201 20:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.201 20:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.465 { 00:23:47.465 "cntlid": 87, 
00:23:47.465 "qid": 0, 00:23:47.465 "state": "enabled", 00:23:47.465 "thread": "nvmf_tgt_poll_group_000", 00:23:47.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:47.465 "listen_address": { 00:23:47.465 "trtype": "TCP", 00:23:47.465 "adrfam": "IPv4", 00:23:47.465 "traddr": "10.0.0.2", 00:23:47.465 "trsvcid": "4420" 00:23:47.465 }, 00:23:47.465 "peer_address": { 00:23:47.465 "trtype": "TCP", 00:23:47.465 "adrfam": "IPv4", 00:23:47.465 "traddr": "10.0.0.1", 00:23:47.465 "trsvcid": "33572" 00:23:47.465 }, 00:23:47.465 "auth": { 00:23:47.465 "state": "completed", 00:23:47.465 "digest": "sha384", 00:23:47.465 "dhgroup": "ffdhe6144" 00:23:47.465 } 00:23:47.465 } 00:23:47.465 ]' 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:47.465 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.768 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:47.768 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.768 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.768 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.768 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.053 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:48.053 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.963 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.221 20:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.127 00:23:52.127 20:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.127 20:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.128 20:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.699 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.699 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.699 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.699 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.700 { 00:23:52.700 "cntlid": 89, 00:23:52.700 "qid": 0, 00:23:52.700 "state": "enabled", 00:23:52.700 "thread": "nvmf_tgt_poll_group_000", 00:23:52.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:52.700 "listen_address": { 00:23:52.700 "trtype": "TCP", 00:23:52.700 "adrfam": "IPv4", 00:23:52.700 "traddr": "10.0.0.2", 00:23:52.700 "trsvcid": "4420" 00:23:52.700 }, 00:23:52.700 "peer_address": { 00:23:52.700 "trtype": "TCP", 00:23:52.700 "adrfam": "IPv4", 00:23:52.700 "traddr": "10.0.0.1", 00:23:52.700 "trsvcid": "33588" 00:23:52.700 }, 00:23:52.700 "auth": { 00:23:52.700 "state": "completed", 00:23:52.700 "digest": "sha384", 00:23:52.700 "dhgroup": "ffdhe8192" 00:23:52.700 } 00:23:52.700 } 00:23:52.700 ]' 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.700 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.268 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:53.269 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 20:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.803 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.803 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.713 00:23:57.972 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.972 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.972 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.231 { 00:23:58.231 "cntlid": 91, 00:23:58.231 "qid": 0, 00:23:58.231 "state": "enabled", 00:23:58.231 "thread": "nvmf_tgt_poll_group_000", 00:23:58.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:58.231 "listen_address": { 00:23:58.231 "trtype": "TCP", 00:23:58.231 "adrfam": "IPv4", 00:23:58.231 "traddr": "10.0.0.2", 00:23:58.231 "trsvcid": "4420" 00:23:58.231 }, 00:23:58.231 "peer_address": { 00:23:58.231 "trtype": "TCP", 00:23:58.231 "adrfam": "IPv4", 00:23:58.231 "traddr": "10.0.0.1", 00:23:58.231 "trsvcid": "47652" 00:23:58.231 }, 00:23:58.231 "auth": { 00:23:58.231 "state": "completed", 00:23:58.231 "digest": "sha384", 00:23:58.231 "dhgroup": "ffdhe8192" 00:23:58.231 } 00:23:58.231 } 00:23:58.231 ]' 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:58.231 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.491 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.491 20:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.491 20:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.059 20:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:23:59.059 20:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:00.969 20:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:00.969 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.228 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.229 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.609 00:24:02.609 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:02.609 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:02.609 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.176 20:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.176 { 00:24:03.176 "cntlid": 93, 00:24:03.176 "qid": 0, 00:24:03.176 "state": "enabled", 00:24:03.176 "thread": "nvmf_tgt_poll_group_000", 00:24:03.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:03.176 "listen_address": { 00:24:03.176 "trtype": "TCP", 00:24:03.176 "adrfam": "IPv4", 00:24:03.176 "traddr": "10.0.0.2", 00:24:03.176 "trsvcid": "4420" 00:24:03.176 }, 00:24:03.176 "peer_address": { 00:24:03.176 "trtype": "TCP", 00:24:03.176 "adrfam": "IPv4", 00:24:03.176 "traddr": "10.0.0.1", 00:24:03.176 "trsvcid": "47668" 00:24:03.176 }, 00:24:03.176 "auth": { 00:24:03.176 "state": "completed", 00:24:03.176 "digest": "sha384", 00:24:03.176 "dhgroup": "ffdhe8192" 00:24:03.176 } 00:24:03.176 } 00:24:03.176 ]' 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:03.176 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.436 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.436 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.436 20:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.004 20:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:04.004 20:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.906 20:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.906 20:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:07.807 00:24:07.807 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:07.807 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:07.807 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:08.065 { 00:24:08.065 "cntlid": 95, 00:24:08.065 "qid": 0, 00:24:08.065 "state": "enabled", 00:24:08.065 "thread": "nvmf_tgt_poll_group_000", 00:24:08.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:08.065 "listen_address": { 00:24:08.065 "trtype": "TCP", 00:24:08.065 "adrfam": "IPv4", 00:24:08.065 "traddr": "10.0.0.2", 00:24:08.065 "trsvcid": "4420" 00:24:08.065 }, 00:24:08.065 "peer_address": { 00:24:08.065 "trtype": "TCP", 00:24:08.065 "adrfam": "IPv4", 00:24:08.065 "traddr": "10.0.0.1", 00:24:08.065 "trsvcid": "43356" 00:24:08.065 }, 00:24:08.065 "auth": { 00:24:08.065 "state": "completed", 00:24:08.065 "digest": "sha384", 00:24:08.065 "dhgroup": "ffdhe8192" 00:24:08.065 } 00:24:08.065 } 00:24:08.065 ]' 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:08.065 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:08.323 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:08.323 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:08.323 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.323 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.323 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.888 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:08.888 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.789 20:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:10.789 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.048 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.615 00:24:11.615 
20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:11.615 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:11.615 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.181 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:12.181 { 00:24:12.181 "cntlid": 97, 00:24:12.181 "qid": 0, 00:24:12.181 "state": "enabled", 00:24:12.181 "thread": "nvmf_tgt_poll_group_000", 00:24:12.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:12.182 "listen_address": { 00:24:12.182 "trtype": "TCP", 00:24:12.182 "adrfam": "IPv4", 00:24:12.182 "traddr": "10.0.0.2", 00:24:12.182 "trsvcid": "4420" 00:24:12.182 }, 00:24:12.182 "peer_address": { 00:24:12.182 "trtype": "TCP", 00:24:12.182 "adrfam": "IPv4", 00:24:12.182 "traddr": "10.0.0.1", 00:24:12.182 "trsvcid": "43380" 00:24:12.182 }, 00:24:12.182 "auth": { 00:24:12.182 "state": "completed", 00:24:12.182 "digest": "sha512", 00:24:12.182 "dhgroup": "null" 00:24:12.182 } 00:24:12.182 } 00:24:12.182 ]' 00:24:12.182 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:12.182 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:12.182 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:12.182 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:12.182 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:12.440 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.440 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.440 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.699 20:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:12.699 20:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:14.605 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.175 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.745 00:24:15.745 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:15.745 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:15.745 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:16.681 { 00:24:16.681 "cntlid": 99, 00:24:16.681 "qid": 0, 00:24:16.681 "state": "enabled", 00:24:16.681 "thread": "nvmf_tgt_poll_group_000", 00:24:16.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:16.681 "listen_address": { 00:24:16.681 "trtype": "TCP", 00:24:16.681 "adrfam": "IPv4", 00:24:16.681 "traddr": "10.0.0.2", 00:24:16.681 "trsvcid": "4420" 00:24:16.681 }, 00:24:16.681 "peer_address": { 00:24:16.681 "trtype": "TCP", 00:24:16.681 "adrfam": "IPv4", 00:24:16.681 "traddr": "10.0.0.1", 00:24:16.681 "trsvcid": "43214" 00:24:16.681 }, 00:24:16.681 "auth": { 00:24:16.681 "state": "completed", 00:24:16.681 "digest": "sha512", 00:24:16.681 "dhgroup": "null" 00:24:16.681 } 00:24:16.681 } 00:24:16.681 ]' 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.681 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.940 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:16.940 20:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:18.848 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
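A condensed sketch of the per-iteration flow the log is repeating here (reconstructed only from the commands visible above, not from the literal target/auth.sh source; the target-side rpc_cmd socket is assumed to be the default one, and the address, NQNs and digest/dhgroup/key values are simply the ones used in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
digest=sha512 dhgroup=null keyid=2            # one (digest, dhgroup, key) combination per pass

# 1. Limit the host-side NVMe driver to the digest and DH group under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Register the host on the subsystem with the matching DH-HMAC-CHAP key (plus a controller key when one exists).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. Attach a controller from the host side; the DH-HMAC-CHAP exchange happens during this connect.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 4. Confirm the controller exists and the qpair reports the negotiated auth parameters, then tear down.
"$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

The same sequence runs for every digest/dhgroup/key combination, which is why the surrounding entries repeat with only those three values changing.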
00:24:19.447 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.025 00:24:20.025 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:20.025 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:20.025 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:20.596 { 00:24:20.596 "cntlid": 101, 00:24:20.596 "qid": 0, 00:24:20.596 "state": "enabled", 00:24:20.596 "thread": "nvmf_tgt_poll_group_000", 00:24:20.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:20.596 "listen_address": { 00:24:20.596 "trtype": "TCP", 00:24:20.596 "adrfam": "IPv4", 00:24:20.596 "traddr": "10.0.0.2", 00:24:20.596 "trsvcid": "4420" 00:24:20.596 }, 00:24:20.596 "peer_address": { 00:24:20.596 "trtype": "TCP", 00:24:20.596 "adrfam": "IPv4", 00:24:20.596 "traddr": "10.0.0.1", 00:24:20.596 "trsvcid": "43244" 00:24:20.596 }, 00:24:20.596 "auth": { 00:24:20.596 "state": "completed", 00:24:20.596 "digest": "sha512", 00:24:20.596 "dhgroup": "null" 00:24:20.596 } 00:24:20.596 } 00:24:20.596 ]' 00:24:20.596 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.854 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.113 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:21.113 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:23.653 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:23.653 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:24.223 00:24:24.484 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:24.484 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:24.484 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.053 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.053 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.053 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.053 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.053 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:25.054 { 00:24:25.054 "cntlid": 103, 00:24:25.054 "qid": 0, 00:24:25.054 "state": "enabled", 00:24:25.054 "thread": "nvmf_tgt_poll_group_000", 00:24:25.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:25.054 "listen_address": { 00:24:25.054 "trtype": "TCP", 00:24:25.054 "adrfam": "IPv4", 00:24:25.054 "traddr": "10.0.0.2", 00:24:25.054 "trsvcid": "4420" 00:24:25.054 }, 00:24:25.054 "peer_address": { 00:24:25.054 "trtype": "TCP", 00:24:25.054 "adrfam": "IPv4", 00:24:25.054 "traddr": "10.0.0.1", 00:24:25.054 "trsvcid": "43274" 00:24:25.054 }, 00:24:25.054 "auth": { 00:24:25.054 "state": "completed", 00:24:25.054 "digest": "sha512", 00:24:25.054 "dhgroup": "null" 00:24:25.054 } 00:24:25.054 } 00:24:25.054 ]' 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.054 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.623 20:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:25.623 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:27.533 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:27.791 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
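The attach and verification that follow repeat the checks used throughout this log; the assertion itself boils down to the following sketch (assuming the qpair JSON shape dumped above and the default rpc.py socket for the target-side call; the expected values are the ones for this sha512/ffdhe2048 pass):

qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished successfully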
00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.050 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.986 00:24:28.986 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:28.986 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:28.986 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.244 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:29.244 { 00:24:29.244 "cntlid": 105, 00:24:29.244 "qid": 0, 00:24:29.244 "state": "enabled", 00:24:29.244 "thread": "nvmf_tgt_poll_group_000", 00:24:29.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:29.244 "listen_address": { 00:24:29.244 "trtype": "TCP", 00:24:29.244 "adrfam": "IPv4", 00:24:29.244 "traddr": "10.0.0.2", 00:24:29.244 "trsvcid": "4420" 00:24:29.244 }, 00:24:29.244 "peer_address": { 00:24:29.244 "trtype": "TCP", 00:24:29.245 "adrfam": "IPv4", 00:24:29.245 "traddr": "10.0.0.1", 00:24:29.245 "trsvcid": "48058" 00:24:29.245 }, 00:24:29.245 "auth": { 00:24:29.245 "state": "completed", 00:24:29.245 "digest": "sha512", 00:24:29.245 "dhgroup": "ffdhe2048" 00:24:29.245 } 00:24:29.245 } 00:24:29.245 ]' 00:24:29.245 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:29.245 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:29.245 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:29.503 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:29.503 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:29.503 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.503 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.503 20:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.765 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:29.765 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.670 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.236 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.495 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.495 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.495 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.495 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.061 00:24:33.061 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:33.061 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:33.061 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:33.627 { 00:24:33.627 "cntlid": 107, 00:24:33.627 "qid": 0, 00:24:33.627 "state": "enabled", 00:24:33.627 "thread": "nvmf_tgt_poll_group_000", 00:24:33.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:33.627 "listen_address": { 00:24:33.627 "trtype": "TCP", 00:24:33.627 "adrfam": "IPv4", 00:24:33.627 "traddr": "10.0.0.2", 00:24:33.627 "trsvcid": "4420" 00:24:33.627 }, 00:24:33.627 "peer_address": { 00:24:33.627 "trtype": "TCP", 00:24:33.627 "adrfam": "IPv4", 00:24:33.627 "traddr": "10.0.0.1", 00:24:33.627 "trsvcid": "48098" 00:24:33.627 }, 00:24:33.627 "auth": { 00:24:33.627 "state": "completed", 00:24:33.627 "digest": "sha512", 00:24:33.627 "dhgroup": "ffdhe2048" 00:24:33.627 } 00:24:33.627 } 00:24:33.627 ]' 00:24:33.627 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.886 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.145 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:34.145 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:36.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.675 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.934 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.502 00:24:37.502 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:37.502 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:37.502 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.072 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:38.073 { 00:24:38.073 "cntlid": 109, 00:24:38.073 "qid": 0, 00:24:38.073 "state": "enabled", 00:24:38.073 "thread": "nvmf_tgt_poll_group_000", 00:24:38.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:38.073 "listen_address": { 00:24:38.073 "trtype": "TCP", 00:24:38.073 "adrfam": "IPv4", 00:24:38.073 "traddr": "10.0.0.2", 00:24:38.073 "trsvcid": "4420" 00:24:38.073 }, 00:24:38.073 "peer_address": { 00:24:38.073 "trtype": "TCP", 00:24:38.073 "adrfam": "IPv4", 00:24:38.073 "traddr": "10.0.0.1", 00:24:38.073 "trsvcid": "56448" 00:24:38.073 }, 00:24:38.073 "auth": { 00:24:38.073 "state": "completed", 00:24:38.073 "digest": "sha512", 00:24:38.073 "dhgroup": "ffdhe2048" 00:24:38.073 } 00:24:38.073 } 00:24:38.073 ]' 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:38.073 20:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.073 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.332 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.591 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:38.591 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:40.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:40.499 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:41.067 20:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:41.067 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:41.327 00:24:41.587 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:41.587 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:41.587 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:42.156 { 00:24:42.156 "cntlid": 111, 00:24:42.156 "qid": 0, 00:24:42.156 "state": "enabled", 00:24:42.156 "thread": "nvmf_tgt_poll_group_000", 00:24:42.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:42.156 "listen_address": { 00:24:42.156 "trtype": "TCP", 00:24:42.156 "adrfam": "IPv4", 00:24:42.156 "traddr": "10.0.0.2", 00:24:42.156 "trsvcid": "4420" 00:24:42.156 }, 00:24:42.156 "peer_address": { 00:24:42.156 "trtype": "TCP", 00:24:42.156 "adrfam": "IPv4", 00:24:42.156 "traddr": "10.0.0.1", 00:24:42.156 "trsvcid": "56464" 00:24:42.156 }, 00:24:42.156 "auth": { 00:24:42.156 "state": "completed", 00:24:42.156 "digest": "sha512", 00:24:42.156 "dhgroup": "ffdhe2048" 00:24:42.156 } 00:24:42.156 } 00:24:42.156 ]' 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:42.156 
20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.156 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.415 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:42.415 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:44.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:44.950 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:45.221 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:24:45.221 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:45.221 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.222 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.159 00:24:46.159 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:46.159 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.159 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.416 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:46.416 { 00:24:46.416 "cntlid": 113, 00:24:46.416 "qid": 0, 00:24:46.416 "state": "enabled", 00:24:46.416 "thread": "nvmf_tgt_poll_group_000", 00:24:46.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:46.416 "listen_address": { 00:24:46.416 "trtype": "TCP", 00:24:46.416 "adrfam": "IPv4", 00:24:46.416 "traddr": "10.0.0.2", 00:24:46.416 "trsvcid": "4420" 00:24:46.416 }, 00:24:46.416 "peer_address": { 00:24:46.416 "trtype": "TCP", 00:24:46.416 "adrfam": "IPv4", 00:24:46.416 "traddr": "10.0.0.1", 00:24:46.416 "trsvcid": "54564" 00:24:46.416 }, 00:24:46.416 "auth": { 00:24:46.416 "state": "completed", 00:24:46.416 "digest": "sha512", 00:24:46.416 "dhgroup": "ffdhe3072" 00:24:46.416 } 00:24:46.416 } 00:24:46.416 ]' 00:24:46.416 20:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.416 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.999 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:46.999 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:48.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.425 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.684 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.622 00:24:49.622 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:49.622 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:49.622 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:49.880 { 00:24:49.880 "cntlid": 115, 00:24:49.880 "qid": 0, 00:24:49.880 "state": "enabled", 00:24:49.880 "thread": "nvmf_tgt_poll_group_000", 00:24:49.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:49.880 "listen_address": { 00:24:49.880 "trtype": "TCP", 00:24:49.880 "adrfam": "IPv4", 00:24:49.880 "traddr": "10.0.0.2", 00:24:49.880 "trsvcid": "4420" 00:24:49.880 }, 00:24:49.880 "peer_address": { 00:24:49.880 "trtype": "TCP", 00:24:49.880 "adrfam": "IPv4", 
00:24:49.880 "traddr": "10.0.0.1", 00:24:49.880 "trsvcid": "54584" 00:24:49.880 }, 00:24:49.880 "auth": { 00:24:49.880 "state": "completed", 00:24:49.880 "digest": "sha512", 00:24:49.880 "dhgroup": "ffdhe3072" 00:24:49.880 } 00:24:49.880 } 00:24:49.880 ]' 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:49.880 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:50.140 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:50.140 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:50.140 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.140 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.140 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.709 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:50.709 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:52.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:52.088 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.347 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.605 00:24:52.605 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:52.605 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:52.605 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:53.172 { 00:24:53.172 "cntlid": 117, 00:24:53.172 "qid": 0, 00:24:53.172 "state": "enabled", 00:24:53.172 "thread": "nvmf_tgt_poll_group_000", 00:24:53.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:53.172 "listen_address": { 00:24:53.172 "trtype": "TCP", 
00:24:53.172 "adrfam": "IPv4", 00:24:53.172 "traddr": "10.0.0.2", 00:24:53.172 "trsvcid": "4420" 00:24:53.172 }, 00:24:53.172 "peer_address": { 00:24:53.172 "trtype": "TCP", 00:24:53.172 "adrfam": "IPv4", 00:24:53.172 "traddr": "10.0.0.1", 00:24:53.172 "trsvcid": "54604" 00:24:53.172 }, 00:24:53.172 "auth": { 00:24:53.172 "state": "completed", 00:24:53.172 "digest": "sha512", 00:24:53.172 "dhgroup": "ffdhe3072" 00:24:53.172 } 00:24:53.172 } 00:24:53.172 ]' 00:24:53.172 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:53.431 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:53.431 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:53.431 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:53.431 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:53.431 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.431 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.431 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.997 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:53.997 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:55.900 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:56.158 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:56.725 00:24:56.725 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:56.725 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.725 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:56.984 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.984 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.984 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.984 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:57.242 { 00:24:57.242 "cntlid": 119, 00:24:57.242 "qid": 0, 00:24:57.242 "state": "enabled", 00:24:57.242 "thread": "nvmf_tgt_poll_group_000", 00:24:57.242 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:57.242 "listen_address": { 00:24:57.242 "trtype": "TCP", 00:24:57.242 "adrfam": "IPv4", 00:24:57.242 "traddr": "10.0.0.2", 00:24:57.242 "trsvcid": "4420" 00:24:57.242 }, 00:24:57.242 "peer_address": { 00:24:57.242 "trtype": "TCP", 00:24:57.242 "adrfam": "IPv4", 00:24:57.242 "traddr": "10.0.0.1", 00:24:57.242 "trsvcid": "36446" 00:24:57.242 }, 00:24:57.242 "auth": { 00:24:57.242 "state": "completed", 00:24:57.242 "digest": "sha512", 00:24:57.242 "dhgroup": "ffdhe3072" 00:24:57.242 } 00:24:57.242 } 00:24:57.242 ]' 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:57.242 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.500 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:57.500 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:59.406 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:59.406 20:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:59.666 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:24:59.666 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:59.666 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.667 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.239 00:25:00.498 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:00.498 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:00.498 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.757 20:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:00.757 { 00:25:00.757 "cntlid": 121, 00:25:00.757 "qid": 0, 00:25:00.757 "state": "enabled", 00:25:00.757 "thread": "nvmf_tgt_poll_group_000", 00:25:00.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:00.757 "listen_address": { 00:25:00.757 "trtype": "TCP", 00:25:00.757 "adrfam": "IPv4", 00:25:00.757 "traddr": "10.0.0.2", 00:25:00.757 "trsvcid": "4420" 00:25:00.757 }, 00:25:00.757 "peer_address": { 00:25:00.757 "trtype": "TCP", 00:25:00.757 "adrfam": "IPv4", 00:25:00.757 "traddr": "10.0.0.1", 00:25:00.757 "trsvcid": "36478" 00:25:00.757 }, 00:25:00.757 "auth": { 00:25:00.757 "state": "completed", 00:25:00.757 "digest": "sha512", 00:25:00.757 "dhgroup": "ffdhe4096" 00:25:00.757 } 00:25:00.757 } 00:25:00.757 ]' 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:00.757 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:01.015 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:01.015 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:01.015 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:01.274 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:01.274 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:03.178 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.437 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.374 00:25:04.374 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:04.374 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:04.374 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:05.311 { 00:25:05.311 "cntlid": 123, 00:25:05.311 "qid": 0, 00:25:05.311 "state": "enabled", 00:25:05.311 "thread": "nvmf_tgt_poll_group_000", 00:25:05.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:05.311 "listen_address": { 00:25:05.311 "trtype": "TCP", 00:25:05.311 "adrfam": "IPv4", 00:25:05.311 "traddr": "10.0.0.2", 00:25:05.311 "trsvcid": "4420" 00:25:05.311 }, 00:25:05.311 "peer_address": { 00:25:05.311 "trtype": "TCP", 00:25:05.311 "adrfam": "IPv4", 00:25:05.311 "traddr": "10.0.0.1", 00:25:05.311 "trsvcid": "36510" 00:25:05.311 }, 00:25:05.311 "auth": { 00:25:05.311 "state": "completed", 00:25:05.311 "digest": "sha512", 00:25:05.311 "dhgroup": "ffdhe4096" 00:25:05.311 } 00:25:05.311 } 00:25:05.311 ]' 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:05.311 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:05.570 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:05.570 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:07.476 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:07.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:07.476 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:07.476 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.477 20:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.477 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.477 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:07.477 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.477 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.477 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.414 00:25:08.414 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:08.414 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:08.414 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.982 20:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:08.982 { 00:25:08.982 "cntlid": 125, 00:25:08.982 "qid": 0, 00:25:08.982 "state": "enabled", 00:25:08.982 "thread": "nvmf_tgt_poll_group_000", 00:25:08.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:08.982 "listen_address": { 00:25:08.982 "trtype": "TCP", 00:25:08.982 "adrfam": "IPv4", 00:25:08.982 "traddr": "10.0.0.2", 00:25:08.982 "trsvcid": "4420" 00:25:08.982 }, 00:25:08.982 "peer_address": { 00:25:08.982 "trtype": "TCP", 00:25:08.982 "adrfam": "IPv4", 00:25:08.982 "traddr": "10.0.0.1", 00:25:08.982 "trsvcid": "40230" 00:25:08.982 }, 00:25:08.982 "auth": { 00:25:08.982 "state": "completed", 00:25:08.982 "digest": "sha512", 00:25:08.982 "dhgroup": "ffdhe4096" 00:25:08.982 } 00:25:08.982 } 00:25:08.982 ]' 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:08.982 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:09.919 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:09.919 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:11.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.294 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:11.553 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:12.491 00:25:12.492 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:12.492 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:12.492 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:12.750 20:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:12.750 { 00:25:12.750 "cntlid": 127, 00:25:12.750 "qid": 0, 00:25:12.750 "state": "enabled", 00:25:12.750 "thread": "nvmf_tgt_poll_group_000", 00:25:12.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:12.750 "listen_address": { 00:25:12.750 "trtype": "TCP", 00:25:12.750 "adrfam": "IPv4", 00:25:12.750 "traddr": "10.0.0.2", 00:25:12.750 "trsvcid": "4420" 00:25:12.750 }, 00:25:12.750 "peer_address": { 00:25:12.750 "trtype": "TCP", 00:25:12.750 "adrfam": "IPv4", 00:25:12.750 "traddr": "10.0.0.1", 00:25:12.750 "trsvcid": "40252" 00:25:12.750 }, 00:25:12.750 "auth": { 00:25:12.750 "state": "completed", 00:25:12.750 "digest": "sha512", 00:25:12.750 "dhgroup": "ffdhe4096" 00:25:12.750 } 00:25:12.750 } 00:25:12.750 ]' 00:25:12.750 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:13.009 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:13.576 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:13.576 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:15.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.482 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.098 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.059 00:25:17.059 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:17.059 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:17.059 
20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:17.318 { 00:25:17.318 "cntlid": 129, 00:25:17.318 "qid": 0, 00:25:17.318 "state": "enabled", 00:25:17.318 "thread": "nvmf_tgt_poll_group_000", 00:25:17.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:17.318 "listen_address": { 00:25:17.318 "trtype": "TCP", 00:25:17.318 "adrfam": "IPv4", 00:25:17.318 "traddr": "10.0.0.2", 00:25:17.318 "trsvcid": "4420" 00:25:17.318 }, 00:25:17.318 "peer_address": { 00:25:17.318 "trtype": "TCP", 00:25:17.318 "adrfam": "IPv4", 00:25:17.318 "traddr": "10.0.0.1", 00:25:17.318 "trsvcid": "56440" 00:25:17.318 }, 00:25:17.318 "auth": { 00:25:17.318 "state": "completed", 00:25:17.318 "digest": "sha512", 00:25:17.318 "dhgroup": "ffdhe6144" 00:25:17.318 } 00:25:17.318 } 00:25:17.318 ]' 00:25:17.318 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:17.318 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:17.318 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:17.576 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:17.576 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:17.576 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.576 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.576 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.834 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:17.834 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:19.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.210 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.786 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.726 00:25:20.726 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:20.726 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:20.726 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:21.293 { 00:25:21.293 "cntlid": 131, 00:25:21.293 "qid": 0, 00:25:21.293 "state": "enabled", 00:25:21.293 "thread": "nvmf_tgt_poll_group_000", 00:25:21.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:21.293 "listen_address": { 00:25:21.293 "trtype": "TCP", 00:25:21.293 "adrfam": "IPv4", 00:25:21.293 "traddr": "10.0.0.2", 00:25:21.293 "trsvcid": "4420" 00:25:21.293 }, 00:25:21.293 "peer_address": { 00:25:21.293 "trtype": "TCP", 00:25:21.293 "adrfam": "IPv4", 00:25:21.293 "traddr": "10.0.0.1", 00:25:21.293 "trsvcid": "56488" 00:25:21.293 }, 00:25:21.293 "auth": { 00:25:21.293 "state": "completed", 00:25:21.293 "digest": "sha512", 00:25:21.293 "dhgroup": "ffdhe6144" 00:25:21.293 } 00:25:21.293 } 00:25:21.293 ]' 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:21.293 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:21.293 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:21.293 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:21.551 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:21.551 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:21.551 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:21.808 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:21.808 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:24.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:24.343 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.602 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.541 00:25:25.541 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:25.541 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.541 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:26.109 { 00:25:26.109 "cntlid": 133, 00:25:26.109 "qid": 0, 00:25:26.109 "state": "enabled", 00:25:26.109 "thread": "nvmf_tgt_poll_group_000", 00:25:26.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:26.109 "listen_address": { 00:25:26.109 "trtype": "TCP", 00:25:26.109 "adrfam": "IPv4", 00:25:26.109 "traddr": "10.0.0.2", 00:25:26.109 "trsvcid": "4420" 00:25:26.109 }, 00:25:26.109 "peer_address": { 00:25:26.109 "trtype": "TCP", 00:25:26.109 "adrfam": "IPv4", 00:25:26.109 "traddr": "10.0.0.1", 00:25:26.109 "trsvcid": "56506" 00:25:26.109 }, 00:25:26.109 "auth": { 00:25:26.109 "state": "completed", 00:25:26.109 "digest": "sha512", 00:25:26.109 "dhgroup": "ffdhe6144" 00:25:26.109 } 00:25:26.109 } 00:25:26.109 ]' 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:26.109 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:26.369 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret 
DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:26.369 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:27.745 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:28.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.004 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:25:28.571 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:29.140 00:25:29.140 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:29.140 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:29.140 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:30.079 { 00:25:30.079 "cntlid": 135, 00:25:30.079 "qid": 0, 00:25:30.079 "state": "enabled", 00:25:30.079 "thread": "nvmf_tgt_poll_group_000", 00:25:30.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:30.079 "listen_address": { 00:25:30.079 "trtype": "TCP", 00:25:30.079 "adrfam": "IPv4", 00:25:30.079 "traddr": "10.0.0.2", 00:25:30.079 "trsvcid": "4420" 00:25:30.079 }, 00:25:30.079 "peer_address": { 00:25:30.079 "trtype": "TCP", 00:25:30.079 "adrfam": "IPv4", 00:25:30.079 "traddr": "10.0.0.1", 00:25:30.079 "trsvcid": "48810" 00:25:30.079 }, 00:25:30.079 "auth": { 00:25:30.079 "state": "completed", 00:25:30.079 "digest": "sha512", 00:25:30.079 "dhgroup": "ffdhe6144" 00:25:30.079 } 00:25:30.079 } 00:25:30.079 ]' 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:30.079 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:30.080 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:30.080 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:30.080 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:31.021 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:31.021 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:32.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.927 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.831 00:25:34.831 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:34.831 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.831 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.088 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:35.088 { 00:25:35.088 "cntlid": 137, 00:25:35.088 "qid": 0, 00:25:35.088 "state": "enabled", 00:25:35.088 "thread": "nvmf_tgt_poll_group_000", 00:25:35.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:35.089 "listen_address": { 00:25:35.089 "trtype": "TCP", 00:25:35.089 "adrfam": "IPv4", 00:25:35.089 "traddr": "10.0.0.2", 00:25:35.089 "trsvcid": "4420" 00:25:35.089 }, 00:25:35.089 "peer_address": { 00:25:35.089 "trtype": "TCP", 00:25:35.089 "adrfam": "IPv4", 00:25:35.089 "traddr": "10.0.0.1", 00:25:35.089 "trsvcid": "48846" 00:25:35.089 }, 00:25:35.089 "auth": { 00:25:35.089 "state": "completed", 00:25:35.089 "digest": "sha512", 00:25:35.089 "dhgroup": "ffdhe8192" 00:25:35.089 } 00:25:35.089 } 00:25:35.089 ]' 00:25:35.089 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:35.089 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:35.089 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:35.346 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:35.346 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:35.346 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:35.346 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:35.346 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:35.604 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:35.605 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:37.507 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.508 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.074 20:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.074 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.976 00:25:39.976 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:40.234 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:40.234 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.800 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:40.800 { 00:25:40.800 "cntlid": 139, 00:25:40.800 "qid": 0, 00:25:40.801 "state": "enabled", 00:25:40.801 "thread": "nvmf_tgt_poll_group_000", 00:25:40.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:40.801 "listen_address": { 00:25:40.801 "trtype": "TCP", 00:25:40.801 "adrfam": "IPv4", 00:25:40.801 "traddr": "10.0.0.2", 00:25:40.801 "trsvcid": "4420" 00:25:40.801 }, 00:25:40.801 "peer_address": { 00:25:40.801 "trtype": "TCP", 00:25:40.801 "adrfam": "IPv4", 00:25:40.801 "traddr": "10.0.0.1", 00:25:40.801 "trsvcid": "42358" 00:25:40.801 }, 00:25:40.801 "auth": { 00:25:40.801 "state": "completed", 00:25:40.801 "digest": "sha512", 00:25:40.801 "dhgroup": "ffdhe8192" 00:25:40.801 } 00:25:40.801 } 00:25:40.801 ]' 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:40.801 20:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.801 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:41.368 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:41.368 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: --dhchap-ctrl-secret DHHC-1:02:YmM3M2U1MDViNjk5YjA5MGIzYzc5MjdjZjJiYjM5ZDQ3MWM0ZTU1Mzg2OGJmN2JhISRIxg==: 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:43.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:43.272 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.531 20:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.531 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.441 00:25:45.441 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:45.441 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:45.441 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:45.740 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.740 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:45.740 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.740 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:45.999 { 00:25:45.999 "cntlid": 141, 00:25:45.999 "qid": 0, 00:25:45.999 "state": "enabled", 00:25:45.999 "thread": "nvmf_tgt_poll_group_000", 00:25:45.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:45.999 "listen_address": { 00:25:45.999 "trtype": "TCP", 00:25:45.999 "adrfam": "IPv4", 00:25:45.999 "traddr": "10.0.0.2", 00:25:45.999 "trsvcid": "4420" 00:25:45.999 }, 00:25:45.999 "peer_address": { 00:25:45.999 "trtype": "TCP", 00:25:45.999 "adrfam": "IPv4", 00:25:45.999 "traddr": "10.0.0.1", 00:25:45.999 "trsvcid": "42380" 00:25:45.999 }, 00:25:45.999 "auth": { 00:25:45.999 "state": "completed", 00:25:45.999 "digest": "sha512", 00:25:45.999 "dhgroup": "ffdhe8192" 00:25:45.999 } 00:25:45.999 } 00:25:45.999 ]' 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:45.999 20:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:45.999 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:46.567 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:46.567 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:01:MjhlMWRhOTMyYWFhMDM0MzFiMmM1YWI2MWEwYzMzMjHQlib0: 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:48.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.469 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:49.036 20:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:49.036 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:49.037 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:50.938 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:50.938 { 00:25:50.938 "cntlid": 143, 00:25:50.938 "qid": 0, 00:25:50.938 "state": "enabled", 00:25:50.938 "thread": "nvmf_tgt_poll_group_000", 00:25:50.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:50.938 "listen_address": { 00:25:50.938 "trtype": "TCP", 00:25:50.938 "adrfam": "IPv4", 00:25:50.938 "traddr": "10.0.0.2", 00:25:50.938 "trsvcid": "4420" 00:25:50.938 }, 00:25:50.938 "peer_address": { 00:25:50.938 "trtype": "TCP", 00:25:50.938 "adrfam": "IPv4", 00:25:50.938 "traddr": "10.0.0.1", 00:25:50.938 "trsvcid": "50028" 00:25:50.938 }, 00:25:50.938 "auth": { 00:25:50.938 "state": "completed", 00:25:50.938 "digest": "sha512", 00:25:50.938 "dhgroup": "ffdhe8192" 00:25:50.938 } 00:25:50.938 } 00:25:50.938 ]' 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:50.938 
20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:50.938 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:51.196 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:51.196 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:51.196 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:51.453 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:51.453 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:52.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.824 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:25:53.083 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:25:53.083 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:25:53.083 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:53.083 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:53.083 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:53.342 20:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.342 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.718 00:25:54.718 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:54.718 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:54.718 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:54.976 { 00:25:54.976 "cntlid": 145, 00:25:54.976 "qid": 0, 00:25:54.976 "state": "enabled", 00:25:54.976 "thread": "nvmf_tgt_poll_group_000", 00:25:54.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:54.976 "listen_address": { 00:25:54.976 "trtype": "TCP", 00:25:54.976 "adrfam": "IPv4", 00:25:54.976 "traddr": "10.0.0.2", 00:25:54.976 "trsvcid": "4420" 00:25:54.976 }, 00:25:54.976 "peer_address": { 00:25:54.976 
"trtype": "TCP", 00:25:54.976 "adrfam": "IPv4", 00:25:54.976 "traddr": "10.0.0.1", 00:25:54.976 "trsvcid": "50042" 00:25:54.976 }, 00:25:54.976 "auth": { 00:25:54.976 "state": "completed", 00:25:54.976 "digest": "sha512", 00:25:54.976 "dhgroup": "ffdhe8192" 00:25:54.976 } 00:25:54.976 } 00:25:54.976 ]' 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:54.976 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:54.977 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:54.977 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:54.977 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:54.977 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:55.544 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:55.544 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YjQ1ZjZmZWVmMTQ5NzViODUwOGIwYjBjN2IzMzMyMDc3NTA0MTU0NGVhMTM1MjdmJZoPbQ==: --dhchap-ctrl-secret DHHC-1:03:MDdjZDcyZDI0NDAwNTQzMWZiZTRjYzFiZGZkNzAwMzljMTZkYzJlNDRmZTJkYTYzMmU1YjUzNTQ2NjJjZDc3Ng0NZl4=: 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:56.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.918 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:25:56.919 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:25:57.853 request: 00:25:57.853 { 00:25:57.853 "name": "nvme0", 00:25:57.853 "trtype": "tcp", 00:25:57.853 "traddr": "10.0.0.2", 00:25:57.853 "adrfam": "ipv4", 00:25:57.853 "trsvcid": "4420", 00:25:57.853 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:57.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:57.853 "prchk_reftag": false, 00:25:57.853 "prchk_guard": false, 00:25:57.853 "hdgst": false, 00:25:57.853 "ddgst": false, 00:25:57.853 "dhchap_key": "key2", 00:25:57.853 "allow_unrecognized_csi": false, 00:25:57.853 "method": "bdev_nvme_attach_controller", 00:25:57.853 "req_id": 1 00:25:57.853 } 00:25:57.853 Got JSON-RPC error response 00:25:57.853 response: 00:25:57.853 { 00:25:57.853 "code": -5, 00:25:57.853 "message": "Input/output error" 00:25:57.853 } 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.853 20:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:57.853 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.757 request: 00:25:59.757 { 00:25:59.757 "name": "nvme0", 00:25:59.757 "trtype": "tcp", 00:25:59.757 "traddr": "10.0.0.2", 00:25:59.757 "adrfam": "ipv4", 00:25:59.757 "trsvcid": "4420", 00:25:59.757 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:59.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:59.757 "prchk_reftag": false, 00:25:59.757 "prchk_guard": false, 00:25:59.757 "hdgst": false, 00:25:59.757 "ddgst": false, 00:25:59.757 "dhchap_key": "key1", 00:25:59.757 "dhchap_ctrlr_key": "ckey2", 00:25:59.757 "allow_unrecognized_csi": false, 00:25:59.757 "method": "bdev_nvme_attach_controller", 00:25:59.757 "req_id": 1 00:25:59.757 } 00:25:59.757 Got JSON-RPC error response 00:25:59.757 response: 00:25:59.757 { 00:25:59.757 "code": -5, 00:25:59.757 "message": "Input/output error" 00:25:59.757 } 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:59.758 20:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.758 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.134 request: 00:26:01.134 { 00:26:01.134 "name": "nvme0", 00:26:01.134 "trtype": "tcp", 00:26:01.134 "traddr": "10.0.0.2", 00:26:01.134 "adrfam": "ipv4", 00:26:01.134 "trsvcid": "4420", 00:26:01.134 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:01.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:01.134 "prchk_reftag": false, 00:26:01.134 "prchk_guard": false, 00:26:01.134 "hdgst": false, 00:26:01.134 "ddgst": false, 00:26:01.134 "dhchap_key": "key1", 00:26:01.134 "dhchap_ctrlr_key": "ckey1", 00:26:01.134 "allow_unrecognized_csi": false, 00:26:01.134 "method": "bdev_nvme_attach_controller", 00:26:01.134 "req_id": 1 00:26:01.134 } 00:26:01.134 Got JSON-RPC error response 00:26:01.134 response: 00:26:01.134 { 00:26:01.134 "code": -5, 00:26:01.134 "message": "Input/output error" 00:26:01.134 } 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1710800 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1710800 ']' 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1710800 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1710800 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1710800' 00:26:01.134 killing process with pid 1710800 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1710800 00:26:01.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1710800 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1748388 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1748388 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1748388 ']' 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.392 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.650 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.650 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:26:01.650 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:01.650 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.650 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1748388 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1748388 ']' 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
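The entries above show the test tearing down the previous target process (killprocess 1710800) and restarting nvmf_tgt with --wait-for-rpc and the nvmf_auth log flag inside the cvl_0_0_ns_spdk namespace, then blocking until the new process (pid 1748388) answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, written without the autotest helper functions: the namespace name, binary path, flags and socket path are copied from this run, rpc_get_methods and framework_start_init are standard SPDK RPCs, but the polling loop itself is an illustrative assumption rather than the harness's own code.

    # Launch the target paused at the RPC stage, as in the log above.
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the default RPC socket until the target is listening, then finish init
    # so later nvmf_* and keyring_* RPCs can be issued.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init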
00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.908 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.167 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.167 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:26:02.167 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:26:02.167 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.167 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.426 null0 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NYo 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4a0 ]] 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4a0 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.426 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A5j 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.M5U ]] 00:26:02.426 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M5U 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:26:02.427 20:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TsI 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XC5 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XC5 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yc 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
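With the target restarted, the generated DH-HMAC-CHAP key files are re-registered through keyring_file_add_key (key0..key3, plus the ckey* controller keys where one exists), the host NQN is authorized for key3, and the host-side initiator re-attaches. A condensed sketch of that target/host wiring follows; the per-run /tmp key file name, NQNs and addresses are copied from this log (they are temporaries specific to this job), both rpc.py sockets are assumed to be up already, and the ordering is illustrative rather than the exact sequence auth.sh uses.

    rpc=./scripts/rpc.py

    # Target side: load the sha512 key into the keyring and allow the host NQN
    # to authenticate with it as key3 (no ckey3 exists for this slot in the run).
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yc
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key3

    # Host side (the bdev_nvme initiator on /var/tmp/host.sock): pin the digest
    # and DH group, then attach so the new connection is authenticated with key3.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3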
00:26:02.427 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:04.957 nvme0n1 00:26:04.957 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:04.957 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:04.957 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:04.957 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:05.216 { 00:26:05.216 "cntlid": 1, 00:26:05.216 "qid": 0, 00:26:05.216 "state": "enabled", 00:26:05.216 "thread": "nvmf_tgt_poll_group_000", 00:26:05.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:05.216 "listen_address": { 00:26:05.216 "trtype": "TCP", 00:26:05.216 "adrfam": "IPv4", 00:26:05.216 "traddr": "10.0.0.2", 00:26:05.216 "trsvcid": "4420" 00:26:05.216 }, 00:26:05.216 "peer_address": { 00:26:05.216 "trtype": "TCP", 00:26:05.216 "adrfam": "IPv4", 00:26:05.216 "traddr": "10.0.0.1", 00:26:05.216 "trsvcid": "38400" 00:26:05.216 }, 00:26:05.216 "auth": { 00:26:05.216 "state": "completed", 00:26:05.216 "digest": "sha512", 00:26:05.216 "dhgroup": "ffdhe8192" 00:26:05.216 } 00:26:05.216 } 00:26:05.216 ]' 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:05.216 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:05.476 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:26:05.476 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:26:06.852 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:07.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:26:07.110 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:07.369 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:07.628 request: 00:26:07.628 { 00:26:07.628 "name": "nvme0", 00:26:07.628 "trtype": "tcp", 00:26:07.628 "traddr": "10.0.0.2", 00:26:07.628 "adrfam": "ipv4", 00:26:07.628 "trsvcid": "4420", 00:26:07.628 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:07.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:07.628 "prchk_reftag": false, 00:26:07.628 "prchk_guard": false, 00:26:07.628 "hdgst": false, 00:26:07.628 "ddgst": false, 00:26:07.628 "dhchap_key": "key3", 00:26:07.628 "allow_unrecognized_csi": false, 00:26:07.628 "method": "bdev_nvme_attach_controller", 00:26:07.628 "req_id": 1 00:26:07.628 } 00:26:07.628 Got JSON-RPC error response 00:26:07.628 response: 00:26:07.628 { 00:26:07.628 "code": -5, 00:26:07.628 "message": "Input/output error" 00:26:07.628 } 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:07.628 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:08.195 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:08.763 request: 00:26:08.763 { 00:26:08.763 "name": "nvme0", 00:26:08.763 "trtype": "tcp", 00:26:08.763 "traddr": "10.0.0.2", 00:26:08.763 "adrfam": "ipv4", 00:26:08.763 "trsvcid": "4420", 00:26:08.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:08.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:08.763 "prchk_reftag": false, 00:26:08.763 "prchk_guard": false, 00:26:08.763 "hdgst": false, 00:26:08.763 "ddgst": false, 00:26:08.763 "dhchap_key": "key3", 00:26:08.763 "allow_unrecognized_csi": false, 00:26:08.763 "method": "bdev_nvme_attach_controller", 00:26:08.764 "req_id": 1 00:26:08.764 } 00:26:08.764 Got JSON-RPC error response 00:26:08.764 response: 00:26:08.764 { 00:26:08.764 "code": -5, 00:26:08.764 "message": "Input/output error" 00:26:08.764 } 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:08.764 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:09.023 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:09.959 request: 00:26:09.959 { 00:26:09.959 "name": "nvme0", 00:26:09.959 "trtype": "tcp", 00:26:09.959 "traddr": "10.0.0.2", 00:26:09.959 "adrfam": "ipv4", 00:26:09.959 "trsvcid": "4420", 00:26:09.959 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:09.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:09.959 "prchk_reftag": false, 00:26:09.959 "prchk_guard": false, 00:26:09.959 "hdgst": false, 00:26:09.959 "ddgst": false, 00:26:09.959 "dhchap_key": "key0", 00:26:09.959 "dhchap_ctrlr_key": "key1", 00:26:09.959 "allow_unrecognized_csi": false, 00:26:09.959 "method": "bdev_nvme_attach_controller", 00:26:09.959 "req_id": 1 00:26:09.959 } 00:26:09.959 Got JSON-RPC error response 00:26:09.959 response: 00:26:09.959 { 00:26:09.959 "code": -5, 00:26:09.959 "message": "Input/output error" 00:26:09.959 } 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:10.217 20:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:26:10.217 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:26:10.784 nvme0n1 00:26:10.784 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:26:10.784 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:26:10.784 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:11.352 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.352 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:11.352 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:26:11.612 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:11.613 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:14.147 nvme0n1 00:26:14.147 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:26:14.147 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:26:14.147 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:26:14.407 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:14.682 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.682 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:26:14.682 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: --dhchap-ctrl-secret DHHC-1:03:MTE3OTk0MDI5MzhjMjM5Zjk2ZGIxZjE4MmU2MGEzYmQ1MDhhZGI4MjEwOTc1ODgzZjIwZDQ2NzcxZjEzMzgwNW0rPPU=: 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:16.642 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:16.901 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:18.805 request: 00:26:18.805 { 00:26:18.805 "name": "nvme0", 00:26:18.805 "trtype": "tcp", 00:26:18.805 "traddr": "10.0.0.2", 00:26:18.805 "adrfam": "ipv4", 00:26:18.805 "trsvcid": "4420", 00:26:18.805 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:18.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:18.805 "prchk_reftag": false, 00:26:18.805 "prchk_guard": false, 00:26:18.805 "hdgst": false, 00:26:18.805 "ddgst": false, 00:26:18.805 "dhchap_key": "key1", 00:26:18.805 "allow_unrecognized_csi": false, 00:26:18.805 "method": "bdev_nvme_attach_controller", 00:26:18.805 "req_id": 1 00:26:18.805 } 00:26:18.805 Got JSON-RPC error response 00:26:18.805 response: 00:26:18.805 { 00:26:18.805 "code": -5, 00:26:18.805 "message": "Input/output error" 00:26:18.805 } 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:18.805 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:22.098 nvme0n1 00:26:22.098 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:26:22.098 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:22.098 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:26:22.666 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.666 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:22.666 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:26:23.235 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:26:23.802 nvme0n1 00:26:23.802 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:26:23.802 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:26:23.802 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:24.369 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.369 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:24.369 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:24.936 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:24.936 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.936 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.936 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: '' 2s 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: ]] 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDZjY2NiN2Y5ZjNjZmYwNWVjZTVmMzE5OWEwZDU2ZDIpWrQM: 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:26:24.937 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: 2s 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: ]] 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzhhN2FiN2ZlOWMyNjQ5MjQ2N2I3OTE1MzhmMDk1YzJkMjFiYTAzMTE4M2FjYzczqyvPNg==: 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:26:26.842 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:29.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:29.376 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:31.911 nvme0n1 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:31.911 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:33.814 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:26:33.814 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:26:33.814 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:26:34.382 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:26:34.640 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:26:34.640 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:26:34.640 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:35.207 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:37.110 request: 00:26:37.110 { 00:26:37.110 "name": "nvme0", 00:26:37.110 "dhchap_key": "key1", 00:26:37.110 "dhchap_ctrlr_key": "key3", 00:26:37.110 "method": "bdev_nvme_set_keys", 00:26:37.110 "req_id": 1 00:26:37.110 } 00:26:37.110 Got JSON-RPC error response 00:26:37.110 response: 00:26:37.110 { 00:26:37.110 "code": -13, 00:26:37.110 "message": "Permission denied" 00:26:37.110 } 00:26:37.110 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:37.110 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:37.110 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:37.110 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:37.111 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:26:37.111 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:26:37.111 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:37.678 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:26:37.678 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:26:38.615 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:26:38.615 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:38.615 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:39.182 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:42.473 nvme0n1 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
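The re-key passages above boil down to a two-step rotation: nvmf_subsystem_set_keys changes which key pair the target will accept for this host, and bdev_nvme_set_keys then re-authenticates the already-attached host controller with the new pair. A pair the target no longer accepts is refused with JSON-RPC error -13 ("Permission denied"), which is what the NOT-wrapped calls in the trace assert. A minimal sketch, using the same rpc.py shorthand as above:

  # 1) target: swap the accepted DH-HMAC-CHAP keys for this host
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # 2) host: re-authenticate the live controller with the matching pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # a mismatched pair fails re-authentication: {"code": -13, "message": "Permission denied"}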
00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:42.473 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:43.852 request: 00:26:43.852 { 00:26:43.852 "name": "nvme0", 00:26:43.852 "dhchap_key": "key2", 00:26:43.852 "dhchap_ctrlr_key": "key0", 00:26:43.852 "method": "bdev_nvme_set_keys", 00:26:43.852 "req_id": 1 00:26:43.852 } 00:26:43.852 Got JSON-RPC error response 00:26:43.852 response: 00:26:43.852 { 00:26:43.852 "code": -13, 00:26:43.852 "message": "Permission denied" 00:26:43.852 } 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:43.852 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:26:44.788 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:26:44.788 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:26:45.724 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:26:45.724 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:26:45.724 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1710831 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1710831 ']' 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1710831 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:26:46.291 
20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1710831 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1710831' 00:26:46.291 killing process with pid 1710831 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1710831 00:26:46.291 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1710831 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.858 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.858 rmmod nvme_tcp 00:26:46.858 rmmod nvme_fabrics 00:26:47.118 rmmod nvme_keyring 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1748388 ']' 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1748388 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1748388 ']' 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1748388 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1748388 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1748388' 00:26:47.118 killing process with pid 1748388 00:26:47.118 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1748388 00:26:47.118 20:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1748388 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.723 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NYo /tmp/spdk.key-sha256.A5j /tmp/spdk.key-sha384.TsI /tmp/spdk.key-sha512.8Yc /tmp/spdk.key-sha512.4a0 /tmp/spdk.key-sha384.M5U /tmp/spdk.key-sha256.XC5 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:26:49.626 00:26:49.626 real 6m16.755s 00:26:49.626 user 14m40.577s 00:26:49.626 sys 0m42.688s 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.626 ************************************ 00:26:49.626 END TEST nvmf_auth_target 00:26:49.626 ************************************ 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:49.626 ************************************ 00:26:49.626 START TEST nvmf_bdevio_no_huge 00:26:49.626 ************************************ 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:49.626 * Looking for test storage... 
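Teardown for the auth test, traced just above, follows the usual cleanup()/nvmftestfini shape: stop the host-side SPDK app and the nvmf target, unload the NVMe/TCP fabric modules, restore iptables, and delete the generated key files. A rough sketch using the PIDs from this run; the trace itself goes through the killprocess helper rather than bare kill and names every key file explicitly.

  kill 1710831                     # host-side SPDK app (reactor_1 in the trace)
  kill 1748388                     # nvmf target (reactor_0 in the trace)
  modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  rm -f /tmp/spdk.key-*            # the spdk.key-null/sha256/sha384/sha512 files listed in the trace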
00:26:49.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:26:49.626 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.886 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:49.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.886 --rc genhtml_branch_coverage=1 00:26:49.886 --rc genhtml_function_coverage=1 00:26:49.886 --rc genhtml_legend=1 00:26:49.886 --rc geninfo_all_blocks=1 00:26:49.886 --rc geninfo_unexecuted_blocks=1 00:26:49.887 00:26:49.887 ' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:49.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.887 --rc genhtml_branch_coverage=1 00:26:49.887 --rc genhtml_function_coverage=1 00:26:49.887 --rc genhtml_legend=1 00:26:49.887 --rc geninfo_all_blocks=1 00:26:49.887 --rc geninfo_unexecuted_blocks=1 00:26:49.887 00:26:49.887 ' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:49.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.887 --rc genhtml_branch_coverage=1 00:26:49.887 --rc genhtml_function_coverage=1 00:26:49.887 --rc genhtml_legend=1 00:26:49.887 --rc geninfo_all_blocks=1 00:26:49.887 --rc geninfo_unexecuted_blocks=1 00:26:49.887 00:26:49.887 ' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:49.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.887 --rc genhtml_branch_coverage=1 00:26:49.887 --rc genhtml_function_coverage=1 00:26:49.887 --rc genhtml_legend=1 00:26:49.887 --rc geninfo_all_blocks=1 00:26:49.887 --rc geninfo_unexecuted_blocks=1 00:26:49.887 00:26:49.887 ' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:49.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.887 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.180 
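The gather_supported_nvmf_pci_devs trace that follows builds the e810/x722/mlx device-ID tables and matches them against the host's PCI bus, ultimately selecting the two Intel E810 ports (0x159b) at 0000:84:00.0/1. The same discovery can be done by hand with lspci; a sketch, using only the Intel IDs that appear in the trace:

    # sketch: list the NICs this job would consider usable for NVMe/TCP (device IDs taken from the trace)
    lspci -Dnn | grep -Ei '8086:(1592|159b)'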
20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:53.180 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:53.180 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:53.180 Found net devices under 0000:84:00.0: cvl_0_0 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:53.180 Found net devices under 0000:84:00.1: cvl_0_1 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.180 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:53.181 00:26:53.181 --- 10.0.0.2 ping statistics --- 00:26:53.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.181 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:53.181 00:26:53.181 --- 10.0.0.1 ping statistics --- 00:26:53.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.181 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1756170 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1756170 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1756170 ']' 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.181 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.181 [2024-10-08 20:54:21.815486] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:26:53.181 [2024-10-08 20:54:21.815585] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:26:53.181 [2024-10-08 20:54:21.901313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.439 [2024-10-08 20:54:22.030814] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.439 [2024-10-08 20:54:22.030888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.439 [2024-10-08 20:54:22.030905] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.439 [2024-10-08 20:54:22.030920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.439 [2024-10-08 20:54:22.030931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
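At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with hugepages disabled (1024 MB of regular memory, core mask 0x78, i.e. cores 3-6, matching the reactor notices below) and waitforlisten polls its RPC socket. A rough manual equivalent, sketched under the assumption of the same workspace layout and default /var/tmp/spdk.sock socket:

    # sketch: start the target the way this test does and wait for its RPC socket (relative paths assumed)
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done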
00:26:53.439 [2024-10-08 20:54:22.032205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:53.439 [2024-10-08 20:54:22.032274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:26:53.439 [2024-10-08 20:54:22.032333] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.439 [2024-10-08 20:54:22.032329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.439 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.439 [2024-10-08 20:54:22.200183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.700 Malloc0 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:53.700 [2024-10-08 20:54:22.238487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:53.700 { 00:26:53.700 "params": { 00:26:53.700 "name": "Nvme$subsystem", 00:26:53.700 "trtype": "$TEST_TRANSPORT", 00:26:53.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.700 "adrfam": "ipv4", 00:26:53.700 "trsvcid": "$NVMF_PORT", 00:26:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.700 "hdgst": ${hdgst:-false}, 00:26:53.700 "ddgst": ${ddgst:-false} 00:26:53.700 }, 00:26:53.700 "method": "bdev_nvme_attach_controller" 00:26:53.700 } 00:26:53.700 EOF 00:26:53.700 )") 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:26:53.700 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:53.700 "params": { 00:26:53.700 "name": "Nvme1", 00:26:53.700 "trtype": "tcp", 00:26:53.700 "traddr": "10.0.0.2", 00:26:53.700 "adrfam": "ipv4", 00:26:53.700 "trsvcid": "4420", 00:26:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.700 "hdgst": false, 00:26:53.700 "ddgst": false 00:26:53.700 }, 00:26:53.700 "method": "bdev_nvme_attach_controller" 00:26:53.700 }' 00:26:53.700 [2024-10-08 20:54:22.334060] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
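bdevio.sh has now provisioned the target through rpc_cmd and handed bdevio a bdev_nvme_attach_controller config generated by gen_nvmf_target_json on /dev/fd/62. The same provisioning, written out as plain rpc.py calls (the commands and arguments mirror the rpc_cmd trace above; only the relative rpc.py path is an assumption):

    # sketch: the target-side setup this suite performs, as explicit RPC calls
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420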
00:26:53.700 [2024-10-08 20:54:22.334232] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1756316 ] 00:26:53.962 [2024-10-08 20:54:22.464695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.962 [2024-10-08 20:54:22.583609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.962 [2024-10-08 20:54:22.583683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.962 [2024-10-08 20:54:22.583688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.221 I/O targets: 00:26:54.221 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:54.221 00:26:54.221 00:26:54.221 CUnit - A unit testing framework for C - Version 2.1-3 00:26:54.221 http://cunit.sourceforge.net/ 00:26:54.221 00:26:54.221 00:26:54.221 Suite: bdevio tests on: Nvme1n1 00:26:54.221 Test: blockdev write read block ...passed 00:26:54.221 Test: blockdev write zeroes read block ...passed 00:26:54.221 Test: blockdev write zeroes read no split ...passed 00:26:54.221 Test: blockdev write zeroes read split ...passed 00:26:54.221 Test: blockdev write zeroes read split partial ...passed 00:26:54.221 Test: blockdev reset ...[2024-10-08 20:54:22.936800] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.221 [2024-10-08 20:54:22.936910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efd40 (9): Bad file descriptor 00:26:54.481 [2024-10-08 20:54:23.033264] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:54.481 passed 00:26:54.481 Test: blockdev write read 8 blocks ...passed 00:26:54.481 Test: blockdev write read size > 128k ...passed 00:26:54.481 Test: blockdev write read invalid size ...passed 00:26:54.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:54.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:54.481 Test: blockdev write read max offset ...passed 00:26:54.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:54.481 Test: blockdev writev readv 8 blocks ...passed 00:26:54.481 Test: blockdev writev readv 30 x 1block ...passed 00:26:54.741 Test: blockdev writev readv block ...passed 00:26:54.741 Test: blockdev writev readv size > 128k ...passed 00:26:54.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:54.741 Test: blockdev comparev and writev ...[2024-10-08 20:54:23.286562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.286600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.286625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.286643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.287097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.287124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.287146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.287164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.287615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.287640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.287669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.287693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.288134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.288159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.288181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:54.741 [2024-10-08 20:54:23.288197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.741 passed 00:26:54.741 Test: blockdev nvme passthru rw ...passed 00:26:54.741 Test: blockdev nvme passthru vendor specific ...[2024-10-08 20:54:23.370033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:54.741 [2024-10-08 20:54:23.370061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.370208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:54.741 [2024-10-08 20:54:23.370231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.370371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:54.741 [2024-10-08 20:54:23.370393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.741 [2024-10-08 20:54:23.370532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:54.741 [2024-10-08 20:54:23.370554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.741 passed 00:26:54.741 Test: blockdev nvme admin passthru ...passed 00:26:54.741 Test: blockdev copy ...passed 00:26:54.741 00:26:54.741 Run Summary: Type Total Ran Passed Failed Inactive 00:26:54.741 suites 1 1 n/a 0 0 00:26:54.741 tests 23 23 23 0 0 00:26:54.741 asserts 152 152 152 0 n/a 00:26:54.741 00:26:54.741 Elapsed time = 1.228 seconds 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.313 rmmod nvme_tcp 00:26:55.313 rmmod nvme_fabrics 00:26:55.313 rmmod nvme_keyring 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1756170 ']' 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1756170 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1756170 ']' 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1756170 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1756170 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1756170' 00:26:55.313 killing process with pid 1756170 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1756170 00:26:55.313 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1756170 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.883 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.791 00:26:57.791 real 0m8.146s 00:26:57.791 user 0m12.589s 00:26:57.791 sys 0m3.678s 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.791 ************************************ 00:26:57.791 END TEST nvmf_bdevio_no_huge 00:26:57.791 ************************************ 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:57.791 ************************************ 00:26:57.791 START TEST nvmf_tls 00:26:57.791 ************************************ 00:26:57.791 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:58.049 * Looking for test storage... 00:26:58.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.049 --rc genhtml_branch_coverage=1 00:26:58.049 --rc genhtml_function_coverage=1 00:26:58.049 --rc genhtml_legend=1 00:26:58.049 --rc geninfo_all_blocks=1 00:26:58.049 --rc geninfo_unexecuted_blocks=1 00:26:58.049 00:26:58.049 ' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.049 --rc genhtml_branch_coverage=1 00:26:58.049 --rc genhtml_function_coverage=1 00:26:58.049 --rc genhtml_legend=1 00:26:58.049 --rc geninfo_all_blocks=1 00:26:58.049 --rc geninfo_unexecuted_blocks=1 00:26:58.049 00:26:58.049 ' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.049 --rc genhtml_branch_coverage=1 00:26:58.049 --rc genhtml_function_coverage=1 00:26:58.049 --rc genhtml_legend=1 00:26:58.049 --rc geninfo_all_blocks=1 00:26:58.049 --rc geninfo_unexecuted_blocks=1 00:26:58.049 00:26:58.049 ' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.049 --rc genhtml_branch_coverage=1 00:26:58.049 --rc genhtml_function_coverage=1 00:26:58.049 --rc genhtml_legend=1 00:26:58.049 --rc geninfo_all_blocks=1 00:26:58.049 --rc geninfo_unexecuted_blocks=1 00:26:58.049 00:26:58.049 ' 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.049 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
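tls.sh begins the same way bdevio.sh did: it sources test/nvmf/common.sh, which derives the host identity from nvme-cli before any transports are configured. A stand-alone sketch of that derivation (only nvme gen-hostnqn appears in the trace; the parameter expansion used to split out the UUID is an assumed convenience):

    # sketch: how the NVME_HOSTNQN / NVME_HOSTID pair used by these suites can be derived
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID (assumed extraction), matching the hostid logged here
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"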
00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.050 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.308 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:58.308 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.309 20:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
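At this point gather_supported_nvmf_pci_devs has seeded the e810/x722/mlx arrays with PCI vendor:device IDs from pci_bus_cache and, because SPDK_TEST_NVMF_NICS=e810, narrowed pci_devs to the two E810 functions before walking each one's sysfs entry for bound net interfaces (the "Found net devices under ..." lines further down). A condensed sketch of that sysfs walk, assuming a stand-alone rewrite rather than the helper itself:

#!/usr/bin/env bash
# Sketch: list kernel net interfaces that sit on PCI functions matching a
# vendor:device pair, e.g. Intel E810 (0x8086:0x159b) as used in this run.
vendor=${1:-0x8086}
device=${2:-0x159b}

shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
    # Each netdev bound to the function appears as a directory under <function>/net/.
    for net in "$pci"/net/*; do
        printf 'Found net device under %s: %s\n' "${pci##*/}" "${net##*/}"
    done
done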
00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.600 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:01.601 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:01.601 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:01.601 Found net devices under 0000:84:00.0: cvl_0_0 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:01.601 Found net devices under 0000:84:00.1: cvl_0_1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.601 20:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:27:01.601 00:27:01.601 --- 10.0.0.2 ping statistics --- 00:27:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.601 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:27:01.601 00:27:01.601 --- 10.0.0.1 ping statistics --- 00:27:01.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.601 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1758539 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1758539 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1758539 ']' 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.601 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.601 [2024-10-08 20:54:30.188746] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:01.601 [2024-10-08 20:54:30.188837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.601 [2024-10-08 20:54:30.266313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.861 [2024-10-08 20:54:30.377780] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.861 [2024-10-08 20:54:30.377836] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.861 [2024-10-08 20:54:30.377851] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.861 [2024-10-08 20:54:30.377861] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.861 [2024-10-08 20:54:30.377871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.861 [2024-10-08 20:54:30.378493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:27:01.861 20:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:27:02.433 true 00:27:02.693 20:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:02.693 20:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:27:03.263 20:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:27:03.263 20:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:27:03.263 20:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:27:03.521 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:27:03.521 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:04.088 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:27:04.088 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:27:04.088 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:27:04.347 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:04.347 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:27:04.914 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:27:04.914 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:27:04.914 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:04.914 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:27:05.849 20:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:27:05.849 20:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:27:05.849 20:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:27:06.417 20:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:27:06.417 20:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:06.985 20:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:27:06.985 20:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:27:06.985 20:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:27:07.551 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:07.551 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.UVrLwgOaQA 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.7UsTzPuVZ9 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UVrLwgOaQA 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.7UsTzPuVZ9 00:27:07.810 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:27:08.377 20:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:27:08.636 20:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.UVrLwgOaQA 00:27:08.636 20:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UVrLwgOaQA 00:27:08.636 20:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:09.574 [2024-10-08 20:54:38.035840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.574 20:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:10.142 20:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:10.709 [2024-10-08 20:54:39.287531] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:10.709 [2024-10-08 20:54:39.288042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.709 20:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:10.968 malloc0 00:27:11.226 20:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:11.485 20:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UVrLwgOaQA 00:27:11.743 20:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:27:12.680 20:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UVrLwgOaQA 00:27:22.706 Initializing NVMe Controllers 00:27:22.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:22.706 Initialization complete. Launching workers. 00:27:22.706 ======================================================== 00:27:22.706 Latency(us) 00:27:22.706 Device Information : IOPS MiB/s Average min max 00:27:22.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4072.33 15.91 15727.92 2192.40 21737.79 00:27:22.706 ======================================================== 00:27:22.706 Total : 4072.33 15.91 15727.92 2192.40 21737.79 00:27:22.706 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UVrLwgOaQA 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UVrLwgOaQA 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1760954 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1760954 /var/tmp/bdevperf.sock 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1760954 ']' 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
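The setup traced above reduces to a short RPC sequence: pin the ssl socket implementation to TLS 1.3, finish framework init (the target was launched with --wait-for-rpc), create the TCP transport, a subsystem backed by a malloc namespace, a TLS-enabled listener (-k), then register the PSK file as key0 in the keyring and bind it to host1. Condensed from the trace, with the rpc.py path shortened for readability:

# Condensed from the xtrace above; not a literal excerpt of target/tls.sh.
rpc=./scripts/rpc.py
key_path=/tmp/tmp.UVrLwgOaQA          # holds "NVMeTLSkey-1:01:...:" and is chmod 0600

$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init             # target was started with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place, the perf and bdevperf runs below simply point their own --psk/--psk-path at the same key file and connect to 10.0.0.2:4420.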
00:27:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.706 20:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:22.706 [2024-10-08 20:54:51.452683] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:27:22.706 [2024-10-08 20:54:51.452793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760954 ] 00:27:22.965 [2024-10-08 20:54:51.557148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.224 [2024-10-08 20:54:51.792470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.484 20:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.484 20:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:23.484 20:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UVrLwgOaQA 00:27:24.052 20:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:24.619 [2024-10-08 20:54:53.107009] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:24.619 TLSTESTn1 00:27:24.619 20:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:24.619 Running I/O for 10 seconds... 
00:27:26.930 1550.00 IOPS, 6.05 MiB/s [2024-10-08T18:54:56.628Z] 1565.50 IOPS, 6.12 MiB/s [2024-10-08T18:54:57.562Z] 1540.00 IOPS, 6.02 MiB/s [2024-10-08T18:54:58.501Z] 1544.00 IOPS, 6.03 MiB/s [2024-10-08T18:54:59.436Z] 1539.00 IOPS, 6.01 MiB/s [2024-10-08T18:55:00.371Z] 1527.00 IOPS, 5.96 MiB/s [2024-10-08T18:55:01.745Z] 1543.14 IOPS, 6.03 MiB/s [2024-10-08T18:55:02.679Z] 1534.38 IOPS, 5.99 MiB/s [2024-10-08T18:55:03.614Z] 1528.78 IOPS, 5.97 MiB/s [2024-10-08T18:55:03.614Z] 1522.70 IOPS, 5.95 MiB/s 00:27:34.851 Latency(us) 00:27:34.851 [2024-10-08T18:55:03.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:34.851 Verification LBA range: start 0x0 length 0x2000 00:27:34.851 TLSTESTn1 : 10.04 1528.46 5.97 0.00 0.00 83517.06 13883.92 60196.03 00:27:34.851 [2024-10-08T18:55:03.614Z] =================================================================================================================== 00:27:34.851 [2024-10-08T18:55:03.614Z] Total : 1528.46 5.97 0.00 0.00 83517.06 13883.92 60196.03 00:27:34.851 { 00:27:34.851 "results": [ 00:27:34.851 { 00:27:34.851 "job": "TLSTESTn1", 00:27:34.851 "core_mask": "0x4", 00:27:34.851 "workload": "verify", 00:27:34.851 "status": "finished", 00:27:34.851 "verify_range": { 00:27:34.851 "start": 0, 00:27:34.851 "length": 8192 00:27:34.851 }, 00:27:34.851 "queue_depth": 128, 00:27:34.851 "io_size": 4096, 00:27:34.851 "runtime": 10.043459, 00:27:34.851 "iops": 1528.457476652217, 00:27:34.851 "mibps": 5.970537018172723, 00:27:34.851 "io_failed": 0, 00:27:34.851 "io_timeout": 0, 00:27:34.851 "avg_latency_us": 83517.0625936783, 00:27:34.851 "min_latency_us": 13883.922962962963, 00:27:34.851 "max_latency_us": 60196.02962962963 00:27:34.851 } 00:27:34.851 ], 00:27:34.851 "core_count": 1 00:27:34.851 } 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1760954 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1760954 ']' 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1760954 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1760954 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1760954' 00:27:34.851 killing process with pid 1760954 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1760954 00:27:34.851 Received shutdown signal, test time was about 10.000000 seconds 00:27:34.851 00:27:34.851 Latency(us) 00:27:34.851 [2024-10-08T18:55:03.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.851 [2024-10-08T18:55:03.614Z] 
=================================================================================================================== 00:27:34.851 [2024-10-08T18:55:03.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.851 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1760954 00:27:35.110 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7UsTzPuVZ9 00:27:35.110 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:27:35.110 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7UsTzPuVZ9 00:27:35.110 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7UsTzPuVZ9 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7UsTzPuVZ9 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1762403 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1762403 /var/tmp/bdevperf.sock 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1762403 ']' 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.369 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:35.369 [2024-10-08 20:55:03.977036] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:27:35.369 [2024-10-08 20:55:03.977203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762403 ] 00:27:35.369 [2024-10-08 20:55:04.115596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.628 [2024-10-08 20:55:04.326065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.007 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.007 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:37.007 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7UsTzPuVZ9 00:27:37.267 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:38.207 [2024-10-08 20:55:06.612690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:38.207 [2024-10-08 20:55:06.626831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:38.207 [2024-10-08 20:55:06.627518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f9e0 (107): Transport endpoint is not connected 00:27:38.207 [2024-10-08 20:55:06.628493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f9e0 (9): Bad file descriptor 00:27:38.207 [2024-10-08 20:55:06.629486] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.207 [2024-10-08 20:55:06.629562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:38.207 [2024-10-08 20:55:06.629598] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:27:38.207 [2024-10-08 20:55:06.629667] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:38.207 request: 00:27:38.207 { 00:27:38.207 "name": "TLSTEST", 00:27:38.207 "trtype": "tcp", 00:27:38.207 "traddr": "10.0.0.2", 00:27:38.207 "adrfam": "ipv4", 00:27:38.207 "trsvcid": "4420", 00:27:38.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.207 "prchk_reftag": false, 00:27:38.207 "prchk_guard": false, 00:27:38.207 "hdgst": false, 00:27:38.207 "ddgst": false, 00:27:38.207 "psk": "key0", 00:27:38.207 "allow_unrecognized_csi": false, 00:27:38.207 "method": "bdev_nvme_attach_controller", 00:27:38.207 "req_id": 1 00:27:38.207 } 00:27:38.207 Got JSON-RPC error response 00:27:38.207 response: 00:27:38.207 { 00:27:38.207 "code": -5, 00:27:38.207 "message": "Input/output error" 00:27:38.207 } 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1762403 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1762403 ']' 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1762403 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762403 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762403' 00:27:38.207 killing process with pid 1762403 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1762403 00:27:38.207 Received shutdown signal, test time was about 10.000000 seconds 00:27:38.207 00:27:38.207 Latency(us) 00:27:38.207 [2024-10-08T18:55:06.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.207 [2024-10-08T18:55:06.970Z] =================================================================================================================== 00:27:38.207 [2024-10-08T18:55:06.970Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:38.207 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1762403 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UVrLwgOaQA 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.UVrLwgOaQA 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UVrLwgOaQA 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UVrLwgOaQA 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1762801 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1762801 /var/tmp/bdevperf.sock 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1762801 ']' 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.466 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:38.466 [2024-10-08 20:55:07.161592] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:38.466 [2024-10-08 20:55:07.161713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762801 ] 00:27:38.726 [2024-10-08 20:55:07.266238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.726 [2024-10-08 20:55:07.476098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.665 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:39.665 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:39.665 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UVrLwgOaQA 00:27:40.235 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:27:40.806 [2024-10-08 20:55:09.450036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:40.806 [2024-10-08 20:55:09.464519] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:40.806 [2024-10-08 20:55:09.464596] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:40.806 [2024-10-08 20:55:09.464718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:40.806 [2024-10-08 20:55:09.465235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0e9e0 (107): Transport endpoint is not connected 00:27:40.806 [2024-10-08 20:55:09.466212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0e9e0 (9): Bad file descriptor 00:27:40.806 [2024-10-08 20:55:09.467204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.806 [2024-10-08 20:55:09.467265] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:40.806 [2024-10-08 20:55:09.467300] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:27:40.806 [2024-10-08 20:55:09.467361] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
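The error trace above, together with the JSON-RPC dump that follows, is the expected outcome of the negative test at target/tls.sh@150: host2 was never added to cnode1 via nvmf_subsystem_add_host, so the target cannot find a PSK for the TLS identity and bdev_nvme_attach_controller fails with -5 (Input/output error), which the NOT wrapper counts as a pass. The initiator-side RPC pair being exercised, condensed from the trace:

# Condensed from the xtrace above (bdevperf side); expected to fail by design.
rpc=./scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UVrLwgOaQA
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
# The target logs "Could not find PSK for identity: NVMe0R01 ...host2 ...cnode1"
# and the attach returns "Input/output error", as captured in the log below.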
00:27:40.806 request: 00:27:40.806 { 00:27:40.806 "name": "TLSTEST", 00:27:40.806 "trtype": "tcp", 00:27:40.806 "traddr": "10.0.0.2", 00:27:40.806 "adrfam": "ipv4", 00:27:40.806 "trsvcid": "4420", 00:27:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.806 "prchk_reftag": false, 00:27:40.806 "prchk_guard": false, 00:27:40.806 "hdgst": false, 00:27:40.806 "ddgst": false, 00:27:40.806 "psk": "key0", 00:27:40.806 "allow_unrecognized_csi": false, 00:27:40.806 "method": "bdev_nvme_attach_controller", 00:27:40.806 "req_id": 1 00:27:40.806 } 00:27:40.806 Got JSON-RPC error response 00:27:40.806 response: 00:27:40.806 { 00:27:40.806 "code": -5, 00:27:40.806 "message": "Input/output error" 00:27:40.806 } 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1762801 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1762801 ']' 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1762801 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762801 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762801' 00:27:40.806 killing process with pid 1762801 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1762801 00:27:40.806 Received shutdown signal, test time was about 10.000000 seconds 00:27:40.806 00:27:40.806 Latency(us) 00:27:40.806 [2024-10-08T18:55:09.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.806 [2024-10-08T18:55:09.569Z] =================================================================================================================== 00:27:40.806 [2024-10-08T18:55:09.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:40.806 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1762801 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UVrLwgOaQA 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.UVrLwgOaQA 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UVrLwgOaQA 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UVrLwgOaQA 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1763079 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1763079 /var/tmp/bdevperf.sock 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1763079 ']' 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.377 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:41.377 [2024-10-08 20:55:09.990806] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:41.377 [2024-10-08 20:55:09.990916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763079 ] 00:27:41.377 [2024-10-08 20:55:10.098733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.637 [2024-10-08 20:55:10.320694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.897 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.897 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:41.897 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UVrLwgOaQA 00:27:42.465 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:42.723 [2024-10-08 20:55:11.469870] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:42.723 [2024-10-08 20:55:11.479927] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:42.723 [2024-10-08 20:55:11.479987] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:42.723 [2024-10-08 20:55:11.480083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:42.723 [2024-10-08 20:55:11.480896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14379e0 (107): Transport endpoint is not connected 00:27:42.723 [2024-10-08 20:55:11.481883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14379e0 (9): Bad file descriptor 00:27:42.723 [2024-10-08 20:55:11.482881] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:42.723 [2024-10-08 20:55:11.482911] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:42.723 [2024-10-08 20:55:11.482928] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:27:42.723 [2024-10-08 20:55:11.482950] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:42.723 request: 00:27:42.723 { 00:27:42.723 "name": "TLSTEST", 00:27:42.723 "trtype": "tcp", 00:27:42.723 "traddr": "10.0.0.2", 00:27:42.723 "adrfam": "ipv4", 00:27:42.723 "trsvcid": "4420", 00:27:42.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.723 "prchk_reftag": false, 00:27:42.723 "prchk_guard": false, 00:27:42.723 "hdgst": false, 00:27:42.723 "ddgst": false, 00:27:42.723 "psk": "key0", 00:27:42.723 "allow_unrecognized_csi": false, 00:27:42.723 "method": "bdev_nvme_attach_controller", 00:27:42.723 "req_id": 1 00:27:42.723 } 00:27:42.723 Got JSON-RPC error response 00:27:42.723 response: 00:27:42.723 { 00:27:42.723 "code": -5, 00:27:42.723 "message": "Input/output error" 00:27:42.723 } 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1763079 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1763079 ']' 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1763079 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1763079 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763079' 00:27:43.071 killing process with pid 1763079 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1763079 00:27:43.071 Received shutdown signal, test time was about 10.000000 seconds 00:27:43.071 00:27:43.071 Latency(us) 00:27:43.071 [2024-10-08T18:55:11.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.071 [2024-10-08T18:55:11.834Z] =================================================================================================================== 00:27:43.071 [2024-10-08T18:55:11.834Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:43.071 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1763079 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:43.333 
20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1763349 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1763349 /var/tmp/bdevperf.sock 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1763349 ']' 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.333 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:43.333 [2024-10-08 20:55:11.998672] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:43.333 [2024-10-08 20:55:11.998796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763349 ] 00:27:43.592 [2024-10-08 20:55:12.113632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.592 [2024-10-08 20:55:12.333715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.850 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.850 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:43.850 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:27:44.109 [2024-10-08 20:55:12.857111] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:27:44.109 [2024-10-08 20:55:12.857212] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:44.109 request: 00:27:44.109 { 00:27:44.109 "name": "key0", 00:27:44.109 "path": "", 00:27:44.109 "method": "keyring_file_add_key", 00:27:44.109 "req_id": 1 00:27:44.109 } 00:27:44.109 Got JSON-RPC error response 00:27:44.109 response: 00:27:44.109 { 00:27:44.109 "code": -1, 00:27:44.109 "message": "Operation not permitted" 00:27:44.109 } 00:27:44.369 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:44.937 [2024-10-08 20:55:13.519233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.937 [2024-10-08 20:55:13.519349] bdev_nvme.c:6495:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:27:44.937 request: 00:27:44.937 { 00:27:44.937 "name": "TLSTEST", 00:27:44.937 "trtype": "tcp", 00:27:44.937 "traddr": "10.0.0.2", 00:27:44.937 "adrfam": "ipv4", 00:27:44.937 "trsvcid": "4420", 00:27:44.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.937 "prchk_reftag": false, 00:27:44.937 "prchk_guard": false, 00:27:44.937 "hdgst": false, 00:27:44.937 "ddgst": false, 00:27:44.937 "psk": "key0", 00:27:44.937 "allow_unrecognized_csi": false, 00:27:44.937 "method": "bdev_nvme_attach_controller", 00:27:44.937 "req_id": 1 00:27:44.937 } 00:27:44.937 Got JSON-RPC error response 00:27:44.937 response: 00:27:44.937 { 00:27:44.937 "code": -126, 00:27:44.937 "message": "Required key not available" 00:27:44.937 } 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1763349 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1763349 ']' 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1763349 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1763349 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763349' 00:27:44.937 killing process with pid 1763349 00:27:44.937 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1763349 00:27:44.937 Received shutdown signal, test time was about 10.000000 seconds 00:27:44.937 00:27:44.937 Latency(us) 00:27:44.937 [2024-10-08T18:55:13.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.938 [2024-10-08T18:55:13.701Z] =================================================================================================================== 00:27:44.938 [2024-10-08T18:55:13.701Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:44.938 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1763349 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1758539 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1758539 ']' 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1758539 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:45.506 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1758539 00:27:45.506 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:45.506 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:45.506 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1758539' 00:27:45.506 killing process with pid 1758539 00:27:45.506 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1758539 00:27:45.506 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1758539 00:27:45.766 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:27:45.766 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:27:45.766 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:27:45.766 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:27:45.766 20:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:27:45.766 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.N0kPLRkMVb 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.N0kPLRkMVb 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1763630 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1763630 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1763630 ']' 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.026 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:46.026 [2024-10-08 20:55:14.712310] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:27:46.026 [2024-10-08 20:55:14.712488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.285 [2024-10-08 20:55:14.875972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.545 [2024-10-08 20:55:15.097260] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.545 [2024-10-08 20:55:15.097367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
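The key_long value above is produced by format_interchange_psk, which shells out to python to wrap the configured PSK in the NVMe TLS interchange form. Below is a minimal Python sketch of that wrapping, assuming the standard layout "NVMeTLSkey-1:<hash id>:Base64(PSK bytes + little-endian CRC32):"; the function name and argument order are illustrative, not SPDK's exact helper.

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    raw = key.encode("ascii")                              # configured PSK, e.g. "001122...6677"
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")  # 4-byte CRC32 appended before encoding
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "{}:{:02x}:{}:".format(prefix, hash_id, b64)

# If the assumed layout is right, this should print the key_long seen above
# (NVMeTLSkey-1:02:MDAx...wWXNJw==:).
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))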
00:27:46.545 [2024-10-08 20:55:15.097403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.545 [2024-10-08 20:55:15.097433] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.545 [2024-10-08 20:55:15.097461] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.545 [2024-10-08 20:55:15.098830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.N0kPLRkMVb 00:27:46.804 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:47.062 [2024-10-08 20:55:15.760900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.062 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:47.629 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:47.888 [2024-10-08 20:55:16.463888] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:47.888 [2024-10-08 20:55:16.464353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.888 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:48.458 malloc0 00:27:48.717 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:48.976 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:27:49.543 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0kPLRkMVb 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.N0kPLRkMVb 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1764060 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1764060 /var/tmp/bdevperf.sock 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1764060 ']' 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.802 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:50.061 [2024-10-08 20:55:18.614154] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:50.061 [2024-10-08 20:55:18.614262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764060 ] 00:27:50.061 [2024-10-08 20:55:18.711958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.320 [2024-10-08 20:55:18.930844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.579 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.579 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:27:50.579 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:27:51.149 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:51.719 [2024-10-08 20:55:20.287857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:51.719 TLSTESTn1 00:27:51.719 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:51.978 Running I/O for 10 seconds... 00:27:54.300 1481.00 IOPS, 5.79 MiB/s [2024-10-08T18:55:24.016Z] 1525.50 IOPS, 5.96 MiB/s [2024-10-08T18:55:24.959Z] 1515.00 IOPS, 5.92 MiB/s [2024-10-08T18:55:25.899Z] 1813.00 IOPS, 7.08 MiB/s [2024-10-08T18:55:26.842Z] 1823.20 IOPS, 7.12 MiB/s [2024-10-08T18:55:27.783Z] 1769.83 IOPS, 6.91 MiB/s [2024-10-08T18:55:28.724Z] 1730.00 IOPS, 6.76 MiB/s [2024-10-08T18:55:30.103Z] 1699.25 IOPS, 6.64 MiB/s [2024-10-08T18:55:31.041Z] 1849.56 IOPS, 7.22 MiB/s [2024-10-08T18:55:31.041Z] 1839.00 IOPS, 7.18 MiB/s 00:28:02.278 Latency(us) 00:28:02.278 [2024-10-08T18:55:31.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.278 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:02.278 Verification LBA range: start 0x0 length 0x2000 00:28:02.278 TLSTESTn1 : 10.03 1845.56 7.21 0.00 0.00 69197.95 13495.56 60196.03 00:28:02.278 [2024-10-08T18:55:31.041Z] =================================================================================================================== 00:28:02.278 [2024-10-08T18:55:31.041Z] Total : 1845.56 7.21 0.00 0.00 69197.95 13495.56 60196.03 00:28:02.278 { 00:28:02.278 "results": [ 00:28:02.278 { 00:28:02.278 "job": "TLSTESTn1", 00:28:02.278 "core_mask": "0x4", 00:28:02.278 "workload": "verify", 00:28:02.278 "status": "finished", 00:28:02.278 "verify_range": { 00:28:02.278 "start": 0, 00:28:02.278 "length": 8192 00:28:02.278 }, 00:28:02.278 "queue_depth": 128, 00:28:02.278 "io_size": 4096, 00:28:02.278 "runtime": 10.033264, 00:28:02.278 "iops": 1845.5609261353036, 00:28:02.278 "mibps": 7.20922236771603, 00:28:02.278 "io_failed": 0, 00:28:02.278 "io_timeout": 0, 00:28:02.278 "avg_latency_us": 69197.9538866187, 00:28:02.278 "min_latency_us": 13495.561481481482, 00:28:02.278 "max_latency_us": 60196.02962962963 00:28:02.278 } 00:28:02.278 ], 00:28:02.278 "core_count": 1 
00:28:02.278 } 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1764060 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1764060 ']' 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1764060 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1764060 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1764060' 00:28:02.278 killing process with pid 1764060 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1764060 00:28:02.278 Received shutdown signal, test time was about 10.000000 seconds 00:28:02.278 00:28:02.278 Latency(us) 00:28:02.278 [2024-10-08T18:55:31.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.278 [2024-10-08T18:55:31.041Z] =================================================================================================================== 00:28:02.278 [2024-10-08T18:55:31.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.278 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1764060 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.N0kPLRkMVb 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0kPLRkMVb 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0kPLRkMVb 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:28:02.537 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0kPLRkMVb 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:02.538 20:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.N0kPLRkMVb 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1765494 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1765494 /var/tmp/bdevperf.sock 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1765494 ']' 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.538 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:02.538 [2024-10-08 20:55:31.153625] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:28:02.538 [2024-10-08 20:55:31.153740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765494 ] 00:28:02.538 [2024-10-08 20:55:31.219021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.797 [2024-10-08 20:55:31.333023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.057 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.057 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:03.057 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:03.628 [2024-10-08 20:55:32.130614] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.N0kPLRkMVb': 0100666 00:28:03.628 [2024-10-08 20:55:32.130728] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:03.628 request: 00:28:03.628 { 00:28:03.628 "name": "key0", 00:28:03.628 "path": "/tmp/tmp.N0kPLRkMVb", 00:28:03.628 "method": "keyring_file_add_key", 00:28:03.628 "req_id": 1 00:28:03.628 } 00:28:03.628 Got JSON-RPC error response 00:28:03.628 response: 00:28:03.628 { 00:28:03.628 "code": -1, 00:28:03.628 "message": "Operation not permitted" 00:28:03.628 } 00:28:03.628 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:03.887 [2024-10-08 20:55:32.636268] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:03.887 [2024-10-08 20:55:32.636388] bdev_nvme.c:6495:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:28:03.887 request: 00:28:03.887 { 00:28:03.887 "name": "TLSTEST", 00:28:03.887 "trtype": "tcp", 00:28:03.887 "traddr": "10.0.0.2", 00:28:03.887 "adrfam": "ipv4", 00:28:03.887 "trsvcid": "4420", 00:28:03.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.887 "prchk_reftag": false, 00:28:03.887 "prchk_guard": false, 00:28:03.887 "hdgst": false, 00:28:03.887 "ddgst": false, 00:28:03.887 "psk": "key0", 00:28:03.887 "allow_unrecognized_csi": false, 00:28:03.887 "method": "bdev_nvme_attach_controller", 00:28:03.887 "req_id": 1 00:28:03.887 } 00:28:03.887 Got JSON-RPC error response 00:28:03.887 response: 00:28:03.887 { 00:28:03.887 "code": -126, 00:28:03.887 "message": "Required key not available" 00:28:03.887 } 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1765494 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1765494 ']' 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1765494 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1765494 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1765494' 00:28:04.148 killing process with pid 1765494 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1765494 00:28:04.148 Received shutdown signal, test time was about 10.000000 seconds 00:28:04.148 00:28:04.148 Latency(us) 00:28:04.148 [2024-10-08T18:55:32.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.148 [2024-10-08T18:55:32.911Z] =================================================================================================================== 00:28:04.148 [2024-10-08T18:55:32.911Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:04.148 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1765494 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1763630 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1763630 ']' 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1763630 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1763630 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763630' 00:28:04.409 killing process with pid 1763630 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1763630 00:28:04.409 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1763630 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1765776 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1765776 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1765776 ']' 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.981 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:04.981 [2024-10-08 20:55:33.667210] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:04.981 [2024-10-08 20:55:33.667332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.244 [2024-10-08 20:55:33.781882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.244 [2024-10-08 20:55:33.993126] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.244 [2024-10-08 20:55:33.993205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.244 [2024-10-08 20:55:33.993227] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.244 [2024-10-08 20:55:33.993245] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.244 [2024-10-08 20:55:33.993260] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
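The keyring_file_add_key failure above ("Invalid permissions for key file '/tmp/tmp.N0kPLRkMVb': 0100666", JSON-RPC code -1) indicates that the file-based keyring refuses PSK files readable by group or other; later in this run the key is only accepted again after tls.sh chmods it back to 0600. A minimal sketch of registering a PSK file with that constraint enforced follows, assuming the owner-only rule inferred from the error text; the rpc.py path and RPC socket are the ones used in this run.

import os
import stat
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def add_psk_file(sock: str, name: str, path: str) -> None:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):   # group/other access would be rejected by keyring_file
        os.chmod(path, 0o600)                  # owner read/write only, mirroring 'chmod 0600' in tls.sh
    subprocess.run([RPC, "-s", sock, "keyring_file_add_key", name, path], check=True)

add_psk_file("/var/tmp/bdevperf.sock", "key0", "/tmp/tmp.N0kPLRkMVb")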
00:28:05.244 [2024-10-08 20:55:33.994146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.183 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:28:06.184 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.184 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:28:06.184 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.N0kPLRkMVb 00:28:06.184 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:06.755 [2024-10-08 20:55:35.327301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.755 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:07.327 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:07.586 [2024-10-08 20:55:36.146940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:07.586 [2024-10-08 20:55:36.147414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.586 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:08.151 malloc0 00:28:08.151 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:08.412 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:08.671 [2024-10-08 
20:55:37.427768] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.N0kPLRkMVb': 0100666 00:28:08.671 [2024-10-08 20:55:37.427816] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:08.671 request: 00:28:08.671 { 00:28:08.671 "name": "key0", 00:28:08.671 "path": "/tmp/tmp.N0kPLRkMVb", 00:28:08.671 "method": "keyring_file_add_key", 00:28:08.671 "req_id": 1 00:28:08.671 } 00:28:08.671 Got JSON-RPC error response 00:28:08.671 response: 00:28:08.671 { 00:28:08.671 "code": -1, 00:28:08.671 "message": "Operation not permitted" 00:28:08.671 } 00:28:08.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:28:09.189 [2024-10-08 20:55:37.760859] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:28:09.189 [2024-10-08 20:55:37.760981] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:28:09.189 request: 00:28:09.189 { 00:28:09.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.190 "host": "nqn.2016-06.io.spdk:host1", 00:28:09.190 "psk": "key0", 00:28:09.190 "method": "nvmf_subsystem_add_host", 00:28:09.190 "req_id": 1 00:28:09.190 } 00:28:09.190 Got JSON-RPC error response 00:28:09.190 response: 00:28:09.190 { 00:28:09.190 "code": -32603, 00:28:09.190 "message": "Internal error" 00:28:09.190 } 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1765776 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1765776 ']' 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1765776 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1765776 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1765776' 00:28:09.190 killing process with pid 1765776 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1765776 00:28:09.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1765776 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.N0kPLRkMVb 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:28:09.763 20:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1766334 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1766334 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1766334 ']' 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.763 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:09.763 [2024-10-08 20:55:38.356315] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:09.763 [2024-10-08 20:55:38.356408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.763 [2024-10-08 20:55:38.452863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.023 [2024-10-08 20:55:38.647801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.023 [2024-10-08 20:55:38.647912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.023 [2024-10-08 20:55:38.647952] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.023 [2024-10-08 20:55:38.647982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.023 [2024-10-08 20:55:38.648008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:10.023 [2024-10-08 20:55:38.648940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.N0kPLRkMVb 00:28:10.962 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:11.222 [2024-10-08 20:55:39.805113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.222 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:11.794 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:12.365 [2024-10-08 20:55:40.848526] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:12.365 [2024-10-08 20:55:40.848904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.365 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:12.936 malloc0 00:28:12.936 20:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:13.197 20:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:13.456 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1766880 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1766880 /var/tmp/bdevperf.sock 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1766880 ']' 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:14.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.027 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:14.027 [2024-10-08 20:55:42.759812] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:14.027 [2024-10-08 20:55:42.759915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766880 ] 00:28:14.288 [2024-10-08 20:55:42.867826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.549 [2024-10-08 20:55:43.091370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.488 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.488 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:15.488 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:16.426 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:16.997 [2024-10-08 20:55:45.462830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:16.997 TLSTESTn1 00:28:16.997 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:28:17.257 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:28:17.257 "subsystems": [ 00:28:17.257 { 00:28:17.257 "subsystem": "keyring", 00:28:17.257 "config": [ 00:28:17.257 { 00:28:17.257 "method": "keyring_file_add_key", 00:28:17.257 "params": { 00:28:17.257 "name": "key0", 00:28:17.257 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:17.257 } 00:28:17.257 } 00:28:17.257 ] 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "subsystem": "iobuf", 00:28:17.257 "config": [ 00:28:17.257 { 00:28:17.257 "method": "iobuf_set_options", 00:28:17.257 "params": { 00:28:17.257 "small_pool_count": 8192, 00:28:17.257 "large_pool_count": 1024, 00:28:17.257 "small_bufsize": 8192, 00:28:17.257 "large_bufsize": 135168 00:28:17.257 } 00:28:17.257 } 00:28:17.257 ] 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "subsystem": "sock", 00:28:17.257 "config": [ 00:28:17.257 { 00:28:17.257 "method": "sock_set_default_impl", 00:28:17.257 "params": { 00:28:17.257 "impl_name": "posix" 00:28:17.257 } 00:28:17.257 }, 
00:28:17.257 { 00:28:17.257 "method": "sock_impl_set_options", 00:28:17.257 "params": { 00:28:17.257 "impl_name": "ssl", 00:28:17.257 "recv_buf_size": 4096, 00:28:17.257 "send_buf_size": 4096, 00:28:17.257 "enable_recv_pipe": true, 00:28:17.257 "enable_quickack": false, 00:28:17.257 "enable_placement_id": 0, 00:28:17.257 "enable_zerocopy_send_server": true, 00:28:17.257 "enable_zerocopy_send_client": false, 00:28:17.257 "zerocopy_threshold": 0, 00:28:17.257 "tls_version": 0, 00:28:17.257 "enable_ktls": false 00:28:17.257 } 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "method": "sock_impl_set_options", 00:28:17.257 "params": { 00:28:17.257 "impl_name": "posix", 00:28:17.257 "recv_buf_size": 2097152, 00:28:17.257 "send_buf_size": 2097152, 00:28:17.257 "enable_recv_pipe": true, 00:28:17.257 "enable_quickack": false, 00:28:17.257 "enable_placement_id": 0, 00:28:17.257 "enable_zerocopy_send_server": true, 00:28:17.257 "enable_zerocopy_send_client": false, 00:28:17.257 "zerocopy_threshold": 0, 00:28:17.257 "tls_version": 0, 00:28:17.257 "enable_ktls": false 00:28:17.257 } 00:28:17.257 } 00:28:17.257 ] 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "subsystem": "vmd", 00:28:17.257 "config": [] 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "subsystem": "accel", 00:28:17.257 "config": [ 00:28:17.257 { 00:28:17.257 "method": "accel_set_options", 00:28:17.257 "params": { 00:28:17.257 "small_cache_size": 128, 00:28:17.257 "large_cache_size": 16, 00:28:17.257 "task_count": 2048, 00:28:17.257 "sequence_count": 2048, 00:28:17.257 "buf_count": 2048 00:28:17.257 } 00:28:17.257 } 00:28:17.257 ] 00:28:17.257 }, 00:28:17.257 { 00:28:17.257 "subsystem": "bdev", 00:28:17.257 "config": [ 00:28:17.257 { 00:28:17.257 "method": "bdev_set_options", 00:28:17.257 "params": { 00:28:17.257 "bdev_io_pool_size": 65535, 00:28:17.257 "bdev_io_cache_size": 256, 00:28:17.257 "bdev_auto_examine": true, 00:28:17.258 "iobuf_small_cache_size": 128, 00:28:17.258 "iobuf_large_cache_size": 16 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_raid_set_options", 00:28:17.258 "params": { 00:28:17.258 "process_window_size_kb": 1024, 00:28:17.258 "process_max_bandwidth_mb_sec": 0 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_iscsi_set_options", 00:28:17.258 "params": { 00:28:17.258 "timeout_sec": 30 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_nvme_set_options", 00:28:17.258 "params": { 00:28:17.258 "action_on_timeout": "none", 00:28:17.258 "timeout_us": 0, 00:28:17.258 "timeout_admin_us": 0, 00:28:17.258 "keep_alive_timeout_ms": 10000, 00:28:17.258 "arbitration_burst": 0, 00:28:17.258 "low_priority_weight": 0, 00:28:17.258 "medium_priority_weight": 0, 00:28:17.258 "high_priority_weight": 0, 00:28:17.258 "nvme_adminq_poll_period_us": 10000, 00:28:17.258 "nvme_ioq_poll_period_us": 0, 00:28:17.258 "io_queue_requests": 0, 00:28:17.258 "delay_cmd_submit": true, 00:28:17.258 "transport_retry_count": 4, 00:28:17.258 "bdev_retry_count": 3, 00:28:17.258 "transport_ack_timeout": 0, 00:28:17.258 "ctrlr_loss_timeout_sec": 0, 00:28:17.258 "reconnect_delay_sec": 0, 00:28:17.258 "fast_io_fail_timeout_sec": 0, 00:28:17.258 "disable_auto_failback": false, 00:28:17.258 "generate_uuids": false, 00:28:17.258 "transport_tos": 0, 00:28:17.258 "nvme_error_stat": false, 00:28:17.258 "rdma_srq_size": 0, 00:28:17.258 "io_path_stat": false, 00:28:17.258 "allow_accel_sequence": false, 00:28:17.258 "rdma_max_cq_size": 0, 00:28:17.258 "rdma_cm_event_timeout_ms": 0, 00:28:17.258 
"dhchap_digests": [ 00:28:17.258 "sha256", 00:28:17.258 "sha384", 00:28:17.258 "sha512" 00:28:17.258 ], 00:28:17.258 "dhchap_dhgroups": [ 00:28:17.258 "null", 00:28:17.258 "ffdhe2048", 00:28:17.258 "ffdhe3072", 00:28:17.258 "ffdhe4096", 00:28:17.258 "ffdhe6144", 00:28:17.258 "ffdhe8192" 00:28:17.258 ] 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_nvme_set_hotplug", 00:28:17.258 "params": { 00:28:17.258 "period_us": 100000, 00:28:17.258 "enable": false 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_malloc_create", 00:28:17.258 "params": { 00:28:17.258 "name": "malloc0", 00:28:17.258 "num_blocks": 8192, 00:28:17.258 "block_size": 4096, 00:28:17.258 "physical_block_size": 4096, 00:28:17.258 "uuid": "6c4bd537-7201-4894-9673-7df638261e15", 00:28:17.258 "optimal_io_boundary": 0, 00:28:17.258 "md_size": 0, 00:28:17.258 "dif_type": 0, 00:28:17.258 "dif_is_head_of_md": false, 00:28:17.258 "dif_pi_format": 0 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "bdev_wait_for_examine" 00:28:17.258 } 00:28:17.258 ] 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "subsystem": "nbd", 00:28:17.258 "config": [] 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "subsystem": "scheduler", 00:28:17.258 "config": [ 00:28:17.258 { 00:28:17.258 "method": "framework_set_scheduler", 00:28:17.258 "params": { 00:28:17.258 "name": "static" 00:28:17.258 } 00:28:17.258 } 00:28:17.258 ] 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "subsystem": "nvmf", 00:28:17.258 "config": [ 00:28:17.258 { 00:28:17.258 "method": "nvmf_set_config", 00:28:17.258 "params": { 00:28:17.258 "discovery_filter": "match_any", 00:28:17.258 "admin_cmd_passthru": { 00:28:17.258 "identify_ctrlr": false 00:28:17.258 }, 00:28:17.258 "dhchap_digests": [ 00:28:17.258 "sha256", 00:28:17.258 "sha384", 00:28:17.258 "sha512" 00:28:17.258 ], 00:28:17.258 "dhchap_dhgroups": [ 00:28:17.258 "null", 00:28:17.258 "ffdhe2048", 00:28:17.258 "ffdhe3072", 00:28:17.258 "ffdhe4096", 00:28:17.258 "ffdhe6144", 00:28:17.258 "ffdhe8192" 00:28:17.258 ] 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_set_max_subsystems", 00:28:17.258 "params": { 00:28:17.258 "max_subsystems": 1024 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_set_crdt", 00:28:17.258 "params": { 00:28:17.258 "crdt1": 0, 00:28:17.258 "crdt2": 0, 00:28:17.258 "crdt3": 0 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_create_transport", 00:28:17.258 "params": { 00:28:17.258 "trtype": "TCP", 00:28:17.258 "max_queue_depth": 128, 00:28:17.258 "max_io_qpairs_per_ctrlr": 127, 00:28:17.258 "in_capsule_data_size": 4096, 00:28:17.258 "max_io_size": 131072, 00:28:17.258 "io_unit_size": 131072, 00:28:17.258 "max_aq_depth": 128, 00:28:17.258 "num_shared_buffers": 511, 00:28:17.258 "buf_cache_size": 4294967295, 00:28:17.258 "dif_insert_or_strip": false, 00:28:17.258 "zcopy": false, 00:28:17.258 "c2h_success": false, 00:28:17.258 "sock_priority": 0, 00:28:17.258 "abort_timeout_sec": 1, 00:28:17.258 "ack_timeout": 0, 00:28:17.258 "data_wr_pool_size": 0 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_create_subsystem", 00:28:17.258 "params": { 00:28:17.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.258 "allow_any_host": false, 00:28:17.258 "serial_number": "SPDK00000000000001", 00:28:17.258 "model_number": "SPDK bdev Controller", 00:28:17.258 "max_namespaces": 10, 00:28:17.258 "min_cntlid": 1, 00:28:17.258 "max_cntlid": 65519, 00:28:17.258 
"ana_reporting": false 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_subsystem_add_host", 00:28:17.258 "params": { 00:28:17.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.258 "host": "nqn.2016-06.io.spdk:host1", 00:28:17.258 "psk": "key0" 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_subsystem_add_ns", 00:28:17.258 "params": { 00:28:17.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.258 "namespace": { 00:28:17.258 "nsid": 1, 00:28:17.258 "bdev_name": "malloc0", 00:28:17.258 "nguid": "6C4BD5377201489496737DF638261E15", 00:28:17.258 "uuid": "6c4bd537-7201-4894-9673-7df638261e15", 00:28:17.258 "no_auto_visible": false 00:28:17.258 } 00:28:17.258 } 00:28:17.258 }, 00:28:17.258 { 00:28:17.258 "method": "nvmf_subsystem_add_listener", 00:28:17.258 "params": { 00:28:17.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.258 "listen_address": { 00:28:17.258 "trtype": "TCP", 00:28:17.258 "adrfam": "IPv4", 00:28:17.258 "traddr": "10.0.0.2", 00:28:17.258 "trsvcid": "4420" 00:28:17.258 }, 00:28:17.258 "secure_channel": true 00:28:17.258 } 00:28:17.258 } 00:28:17.258 ] 00:28:17.258 } 00:28:17.258 ] 00:28:17.258 }' 00:28:17.258 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:28:17.829 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:28:17.829 "subsystems": [ 00:28:17.829 { 00:28:17.829 "subsystem": "keyring", 00:28:17.829 "config": [ 00:28:17.829 { 00:28:17.829 "method": "keyring_file_add_key", 00:28:17.829 "params": { 00:28:17.829 "name": "key0", 00:28:17.829 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:17.829 } 00:28:17.829 } 00:28:17.829 ] 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "subsystem": "iobuf", 00:28:17.829 "config": [ 00:28:17.829 { 00:28:17.829 "method": "iobuf_set_options", 00:28:17.829 "params": { 00:28:17.829 "small_pool_count": 8192, 00:28:17.829 "large_pool_count": 1024, 00:28:17.829 "small_bufsize": 8192, 00:28:17.829 "large_bufsize": 135168 00:28:17.829 } 00:28:17.829 } 00:28:17.829 ] 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "subsystem": "sock", 00:28:17.829 "config": [ 00:28:17.829 { 00:28:17.829 "method": "sock_set_default_impl", 00:28:17.829 "params": { 00:28:17.829 "impl_name": "posix" 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "sock_impl_set_options", 00:28:17.829 "params": { 00:28:17.829 "impl_name": "ssl", 00:28:17.829 "recv_buf_size": 4096, 00:28:17.829 "send_buf_size": 4096, 00:28:17.829 "enable_recv_pipe": true, 00:28:17.829 "enable_quickack": false, 00:28:17.829 "enable_placement_id": 0, 00:28:17.829 "enable_zerocopy_send_server": true, 00:28:17.829 "enable_zerocopy_send_client": false, 00:28:17.829 "zerocopy_threshold": 0, 00:28:17.829 "tls_version": 0, 00:28:17.829 "enable_ktls": false 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "sock_impl_set_options", 00:28:17.829 "params": { 00:28:17.829 "impl_name": "posix", 00:28:17.829 "recv_buf_size": 2097152, 00:28:17.829 "send_buf_size": 2097152, 00:28:17.829 "enable_recv_pipe": true, 00:28:17.829 "enable_quickack": false, 00:28:17.829 "enable_placement_id": 0, 00:28:17.829 "enable_zerocopy_send_server": true, 00:28:17.829 "enable_zerocopy_send_client": false, 00:28:17.829 "zerocopy_threshold": 0, 00:28:17.829 "tls_version": 0, 00:28:17.829 "enable_ktls": false 00:28:17.829 } 00:28:17.829 } 00:28:17.829 ] 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 
"subsystem": "vmd", 00:28:17.829 "config": [] 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "subsystem": "accel", 00:28:17.829 "config": [ 00:28:17.829 { 00:28:17.829 "method": "accel_set_options", 00:28:17.829 "params": { 00:28:17.829 "small_cache_size": 128, 00:28:17.829 "large_cache_size": 16, 00:28:17.829 "task_count": 2048, 00:28:17.829 "sequence_count": 2048, 00:28:17.829 "buf_count": 2048 00:28:17.829 } 00:28:17.829 } 00:28:17.829 ] 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "subsystem": "bdev", 00:28:17.829 "config": [ 00:28:17.829 { 00:28:17.829 "method": "bdev_set_options", 00:28:17.829 "params": { 00:28:17.829 "bdev_io_pool_size": 65535, 00:28:17.829 "bdev_io_cache_size": 256, 00:28:17.829 "bdev_auto_examine": true, 00:28:17.829 "iobuf_small_cache_size": 128, 00:28:17.829 "iobuf_large_cache_size": 16 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "bdev_raid_set_options", 00:28:17.829 "params": { 00:28:17.829 "process_window_size_kb": 1024, 00:28:17.829 "process_max_bandwidth_mb_sec": 0 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "bdev_iscsi_set_options", 00:28:17.829 "params": { 00:28:17.829 "timeout_sec": 30 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "bdev_nvme_set_options", 00:28:17.829 "params": { 00:28:17.829 "action_on_timeout": "none", 00:28:17.829 "timeout_us": 0, 00:28:17.829 "timeout_admin_us": 0, 00:28:17.829 "keep_alive_timeout_ms": 10000, 00:28:17.829 "arbitration_burst": 0, 00:28:17.829 "low_priority_weight": 0, 00:28:17.829 "medium_priority_weight": 0, 00:28:17.829 "high_priority_weight": 0, 00:28:17.829 "nvme_adminq_poll_period_us": 10000, 00:28:17.829 "nvme_ioq_poll_period_us": 0, 00:28:17.829 "io_queue_requests": 512, 00:28:17.829 "delay_cmd_submit": true, 00:28:17.829 "transport_retry_count": 4, 00:28:17.829 "bdev_retry_count": 3, 00:28:17.829 "transport_ack_timeout": 0, 00:28:17.829 "ctrlr_loss_timeout_sec": 0, 00:28:17.829 "reconnect_delay_sec": 0, 00:28:17.829 "fast_io_fail_timeout_sec": 0, 00:28:17.829 "disable_auto_failback": false, 00:28:17.829 "generate_uuids": false, 00:28:17.829 "transport_tos": 0, 00:28:17.829 "nvme_error_stat": false, 00:28:17.829 "rdma_srq_size": 0, 00:28:17.829 "io_path_stat": false, 00:28:17.829 "allow_accel_sequence": false, 00:28:17.829 "rdma_max_cq_size": 0, 00:28:17.829 "rdma_cm_event_timeout_ms": 0, 00:28:17.829 "dhchap_digests": [ 00:28:17.829 "sha256", 00:28:17.829 "sha384", 00:28:17.829 "sha512" 00:28:17.829 ], 00:28:17.829 "dhchap_dhgroups": [ 00:28:17.829 "null", 00:28:17.829 "ffdhe2048", 00:28:17.829 "ffdhe3072", 00:28:17.829 "ffdhe4096", 00:28:17.829 "ffdhe6144", 00:28:17.829 "ffdhe8192" 00:28:17.829 ] 00:28:17.829 } 00:28:17.829 }, 00:28:17.829 { 00:28:17.829 "method": "bdev_nvme_attach_controller", 00:28:17.829 "params": { 00:28:17.829 "name": "TLSTEST", 00:28:17.829 "trtype": "TCP", 00:28:17.829 "adrfam": "IPv4", 00:28:17.829 "traddr": "10.0.0.2", 00:28:17.829 "trsvcid": "4420", 00:28:17.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.829 "prchk_reftag": false, 00:28:17.829 "prchk_guard": false, 00:28:17.829 "ctrlr_loss_timeout_sec": 0, 00:28:17.829 "reconnect_delay_sec": 0, 00:28:17.830 "fast_io_fail_timeout_sec": 0, 00:28:17.830 "psk": "key0", 00:28:17.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.830 "hdgst": false, 00:28:17.830 "ddgst": false, 00:28:17.830 "multipath": "multipath" 00:28:17.830 } 00:28:17.830 }, 00:28:17.830 { 00:28:17.830 "method": "bdev_nvme_set_hotplug", 00:28:17.830 "params": { 00:28:17.830 "period_us": 
100000, 00:28:17.830 "enable": false 00:28:17.830 } 00:28:17.830 }, 00:28:17.830 { 00:28:17.830 "method": "bdev_wait_for_examine" 00:28:17.830 } 00:28:17.830 ] 00:28:17.830 }, 00:28:17.830 { 00:28:17.830 "subsystem": "nbd", 00:28:17.830 "config": [] 00:28:17.830 } 00:28:17.830 ] 00:28:17.830 }' 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1766880 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1766880 ']' 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1766880 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766880 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766880' 00:28:17.830 killing process with pid 1766880 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1766880 00:28:17.830 Received shutdown signal, test time was about 10.000000 seconds 00:28:17.830 00:28:17.830 Latency(us) 00:28:17.830 [2024-10-08T18:55:46.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.830 [2024-10-08T18:55:46.593Z] =================================================================================================================== 00:28:17.830 [2024-10-08T18:55:46.593Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:17.830 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1766880 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1766334 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1766334 ']' 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1766334 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766334 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766334' 00:28:18.397 killing process with pid 1766334 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1766334 00:28:18.397 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1766334 00:28:18.969 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:28:18.969 
20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:18.969 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.969 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:28:18.969 "subsystems": [ 00:28:18.969 { 00:28:18.969 "subsystem": "keyring", 00:28:18.969 "config": [ 00:28:18.969 { 00:28:18.969 "method": "keyring_file_add_key", 00:28:18.969 "params": { 00:28:18.969 "name": "key0", 00:28:18.969 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:18.969 } 00:28:18.969 } 00:28:18.969 ] 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "subsystem": "iobuf", 00:28:18.969 "config": [ 00:28:18.969 { 00:28:18.969 "method": "iobuf_set_options", 00:28:18.969 "params": { 00:28:18.969 "small_pool_count": 8192, 00:28:18.969 "large_pool_count": 1024, 00:28:18.969 "small_bufsize": 8192, 00:28:18.969 "large_bufsize": 135168 00:28:18.969 } 00:28:18.969 } 00:28:18.969 ] 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "subsystem": "sock", 00:28:18.969 "config": [ 00:28:18.969 { 00:28:18.969 "method": "sock_set_default_impl", 00:28:18.969 "params": { 00:28:18.969 "impl_name": "posix" 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "sock_impl_set_options", 00:28:18.969 "params": { 00:28:18.969 "impl_name": "ssl", 00:28:18.969 "recv_buf_size": 4096, 00:28:18.969 "send_buf_size": 4096, 00:28:18.969 "enable_recv_pipe": true, 00:28:18.969 "enable_quickack": false, 00:28:18.969 "enable_placement_id": 0, 00:28:18.969 "enable_zerocopy_send_server": true, 00:28:18.969 "enable_zerocopy_send_client": false, 00:28:18.969 "zerocopy_threshold": 0, 00:28:18.969 "tls_version": 0, 00:28:18.969 "enable_ktls": false 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "sock_impl_set_options", 00:28:18.969 "params": { 00:28:18.969 "impl_name": "posix", 00:28:18.969 "recv_buf_size": 2097152, 00:28:18.969 "send_buf_size": 2097152, 00:28:18.969 "enable_recv_pipe": true, 00:28:18.969 "enable_quickack": false, 00:28:18.969 "enable_placement_id": 0, 00:28:18.969 "enable_zerocopy_send_server": true, 00:28:18.969 "enable_zerocopy_send_client": false, 00:28:18.969 "zerocopy_threshold": 0, 00:28:18.969 "tls_version": 0, 00:28:18.969 "enable_ktls": false 00:28:18.969 } 00:28:18.969 } 00:28:18.969 ] 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "subsystem": "vmd", 00:28:18.969 "config": [] 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "subsystem": "accel", 00:28:18.969 "config": [ 00:28:18.969 { 00:28:18.969 "method": "accel_set_options", 00:28:18.969 "params": { 00:28:18.969 "small_cache_size": 128, 00:28:18.969 "large_cache_size": 16, 00:28:18.969 "task_count": 2048, 00:28:18.969 "sequence_count": 2048, 00:28:18.969 "buf_count": 2048 00:28:18.969 } 00:28:18.969 } 00:28:18.969 ] 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "subsystem": "bdev", 00:28:18.969 "config": [ 00:28:18.969 { 00:28:18.969 "method": "bdev_set_options", 00:28:18.969 "params": { 00:28:18.969 "bdev_io_pool_size": 65535, 00:28:18.969 "bdev_io_cache_size": 256, 00:28:18.969 "bdev_auto_examine": true, 00:28:18.969 "iobuf_small_cache_size": 128, 00:28:18.969 "iobuf_large_cache_size": 16 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_raid_set_options", 00:28:18.969 "params": { 00:28:18.969 "process_window_size_kb": 1024, 00:28:18.969 "process_max_bandwidth_mb_sec": 0 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_iscsi_set_options", 00:28:18.969 "params": { 00:28:18.969 
"timeout_sec": 30 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_nvme_set_options", 00:28:18.969 "params": { 00:28:18.969 "action_on_timeout": "none", 00:28:18.969 "timeout_us": 0, 00:28:18.969 "timeout_admin_us": 0, 00:28:18.969 "keep_alive_timeout_ms": 10000, 00:28:18.969 "arbitration_burst": 0, 00:28:18.969 "low_priority_weight": 0, 00:28:18.969 "medium_priority_weight": 0, 00:28:18.969 "high_priority_weight": 0, 00:28:18.969 "nvme_adminq_poll_period_us": 10000, 00:28:18.969 "nvme_ioq_poll_period_us": 0, 00:28:18.969 "io_queue_requests": 0, 00:28:18.969 "delay_cmd_submit": true, 00:28:18.969 "transport_retry_count": 4, 00:28:18.969 "bdev_retry_count": 3, 00:28:18.969 "transport_ack_timeout": 0, 00:28:18.969 "ctrlr_loss_timeout_sec": 0, 00:28:18.969 "reconnect_delay_sec": 0, 00:28:18.969 "fast_io_fail_timeout_sec": 0, 00:28:18.969 "disable_auto_failback": false, 00:28:18.969 "generate_uuids": false, 00:28:18.969 "transport_tos": 0, 00:28:18.969 "nvme_error_stat": false, 00:28:18.969 "rdma_srq_size": 0, 00:28:18.969 "io_path_stat": false, 00:28:18.969 "allow_accel_sequence": false, 00:28:18.969 "rdma_max_cq_size": 0, 00:28:18.969 "rdma_cm_event_timeout_ms": 0, 00:28:18.969 "dhchap_digests": [ 00:28:18.969 "sha256", 00:28:18.969 "sha384", 00:28:18.969 "sha512" 00:28:18.969 ], 00:28:18.969 "dhchap_dhgroups": [ 00:28:18.969 "null", 00:28:18.969 "ffdhe2048", 00:28:18.969 "ffdhe3072", 00:28:18.969 "ffdhe4096", 00:28:18.969 "ffdhe6144", 00:28:18.969 "ffdhe8192" 00:28:18.969 ] 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_nvme_set_hotplug", 00:28:18.969 "params": { 00:28:18.969 "period_us": 100000, 00:28:18.969 "enable": false 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_malloc_create", 00:28:18.969 "params": { 00:28:18.969 "name": "malloc0", 00:28:18.969 "num_blocks": 8192, 00:28:18.969 "block_size": 4096, 00:28:18.969 "physical_block_size": 4096, 00:28:18.969 "uuid": "6c4bd537-7201-4894-9673-7df638261e15", 00:28:18.969 "optimal_io_boundary": 0, 00:28:18.969 "md_size": 0, 00:28:18.969 "dif_type": 0, 00:28:18.969 "dif_is_head_of_md": false, 00:28:18.969 "dif_pi_format": 0 00:28:18.969 } 00:28:18.969 }, 00:28:18.969 { 00:28:18.969 "method": "bdev_wait_for_examine" 00:28:18.969 } 00:28:18.969 ] 00:28:18.969 }, 00:28:18.969 { 00:28:18.970 "subsystem": "nbd", 00:28:18.970 "config": [] 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "subsystem": "scheduler", 00:28:18.970 "config": [ 00:28:18.970 { 00:28:18.970 "method": "framework_set_scheduler", 00:28:18.970 "params": { 00:28:18.970 "name": "static" 00:28:18.970 } 00:28:18.970 } 00:28:18.970 ] 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "subsystem": "nvmf", 00:28:18.970 "config": [ 00:28:18.970 { 00:28:18.970 "method": "nvmf_set_config", 00:28:18.970 "params": { 00:28:18.970 "discovery_filter": "match_any", 00:28:18.970 "admin_cmd_passthru": { 00:28:18.970 "identify_ctrlr": false 00:28:18.970 }, 00:28:18.970 "dhchap_digests": [ 00:28:18.970 "sha256", 00:28:18.970 "sha384", 00:28:18.970 "sha512" 00:28:18.970 ], 00:28:18.970 "dhchap_dhgroups": [ 00:28:18.970 "null", 00:28:18.970 "ffdhe2048", 00:28:18.970 "ffdhe3072", 00:28:18.970 "ffdhe4096", 00:28:18.970 "ffdhe6144", 00:28:18.970 "ffdhe8192" 00:28:18.970 ] 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_set_max_subsystems", 00:28:18.970 "params": { 00:28:18.970 "max_subsystems": 1024 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_set_crdt", 00:28:18.970 "params": { 
00:28:18.970 "crdt1": 0, 00:28:18.970 "crdt2": 0, 00:28:18.970 "crdt3": 0 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_create_transport", 00:28:18.970 "params": { 00:28:18.970 "trtype": "TCP", 00:28:18.970 "max_queue_depth": 128, 00:28:18.970 "max_io_qpairs_per_ctrlr": 127, 00:28:18.970 "in_capsule_data_size": 4096, 00:28:18.970 "max_io_size": 131072, 00:28:18.970 "io_unit_size": 131072, 00:28:18.970 "max_aq_depth": 128, 00:28:18.970 "num_shared_buffers": 511, 00:28:18.970 "buf_cache_size": 4294967295, 00:28:18.970 "dif_insert_or_strip": false, 00:28:18.970 "zcopy": false, 00:28:18.970 "c2h_success": false, 00:28:18.970 "sock_priority": 0, 00:28:18.970 "abort_timeout_sec": 1, 00:28:18.970 "ack_timeout": 0, 00:28:18.970 "data_wr_pool_size": 0 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_create_subsystem", 00:28:18.970 "params": { 00:28:18.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.970 "allow_any_host": false, 00:28:18.970 "serial_number": "SPDK00000000000001", 00:28:18.970 "model_number": "SPDK bdev Controller", 00:28:18.970 "max_namespaces": 10, 00:28:18.970 "min_cntlid": 1, 00:28:18.970 "max_cntlid": 65519, 00:28:18.970 "ana_reporting": false 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_subsystem_add_host", 00:28:18.970 "params": { 00:28:18.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.970 "host": "nqn.2016-06.io.spdk:host1", 00:28:18.970 "psk": "key0" 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_subsystem_add_ns", 00:28:18.970 "params": { 00:28:18.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.970 "namespace": { 00:28:18.970 "nsid": 1, 00:28:18.970 "bdev_name": "malloc0", 00:28:18.970 "nguid": "6C4BD5377201489496737DF638261E15", 00:28:18.970 "uuid": "6c4bd537-7201-4894-9673-7df638261e15", 00:28:18.970 "no_auto_visible": false 00:28:18.970 } 00:28:18.970 } 00:28:18.970 }, 00:28:18.970 { 00:28:18.970 "method": "nvmf_subsystem_add_listener", 00:28:18.970 "params": { 00:28:18.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.970 "listen_address": { 00:28:18.970 "trtype": "TCP", 00:28:18.970 "adrfam": "IPv4", 00:28:18.970 "traddr": "10.0.0.2", 00:28:18.970 "trsvcid": "4420" 00:28:18.970 }, 00:28:18.970 "secure_channel": true 00:28:18.970 } 00:28:18.970 } 00:28:18.970 ] 00:28:18.970 } 00:28:18.970 ] 00:28:18.970 }' 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1767420 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1767420 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1767420 ']' 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:18.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.970 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:18.970 [2024-10-08 20:55:47.579787] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:18.970 [2024-10-08 20:55:47.579955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.230 [2024-10-08 20:55:47.738146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.230 [2024-10-08 20:55:47.942864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.230 [2024-10-08 20:55:47.942934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.230 [2024-10-08 20:55:47.942950] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.230 [2024-10-08 20:55:47.942964] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.230 [2024-10-08 20:55:47.942975] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.230 [2024-10-08 20:55:47.943609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.487 [2024-10-08 20:55:48.205477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.487 [2024-10-08 20:55:48.237501] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:19.487 [2024-10-08 20:55:48.237801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1767566 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1767566 /var/tmp/bdevperf.sock 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1767566 ']' 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:19.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:28:19.753 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:28:19.753 "subsystems": [ 00:28:19.753 { 00:28:19.753 "subsystem": "keyring", 00:28:19.753 "config": [ 00:28:19.753 { 00:28:19.753 "method": "keyring_file_add_key", 00:28:19.753 "params": { 00:28:19.753 "name": "key0", 00:28:19.753 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:19.753 } 00:28:19.753 } 00:28:19.753 ] 00:28:19.753 }, 00:28:19.753 { 00:28:19.753 "subsystem": "iobuf", 00:28:19.753 "config": [ 00:28:19.753 { 00:28:19.753 "method": "iobuf_set_options", 00:28:19.753 "params": { 00:28:19.753 "small_pool_count": 8192, 00:28:19.753 "large_pool_count": 1024, 00:28:19.753 "small_bufsize": 8192, 00:28:19.753 "large_bufsize": 135168 00:28:19.753 } 00:28:19.753 } 00:28:19.753 ] 00:28:19.753 }, 00:28:19.753 { 00:28:19.753 "subsystem": "sock", 00:28:19.753 "config": [ 00:28:19.753 { 00:28:19.753 "method": "sock_set_default_impl", 00:28:19.753 "params": { 00:28:19.753 "impl_name": "posix" 00:28:19.753 } 00:28:19.753 }, 00:28:19.753 { 00:28:19.753 "method": "sock_impl_set_options", 00:28:19.753 "params": { 00:28:19.753 "impl_name": "ssl", 00:28:19.753 "recv_buf_size": 4096, 00:28:19.753 "send_buf_size": 4096, 00:28:19.753 "enable_recv_pipe": true, 00:28:19.753 "enable_quickack": false, 00:28:19.753 "enable_placement_id": 0, 00:28:19.753 "enable_zerocopy_send_server": true, 00:28:19.753 "enable_zerocopy_send_client": false, 00:28:19.753 "zerocopy_threshold": 0, 00:28:19.753 "tls_version": 0, 00:28:19.753 "enable_ktls": false 00:28:19.754 } 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "method": "sock_impl_set_options", 00:28:19.754 "params": { 00:28:19.754 "impl_name": "posix", 00:28:19.754 "recv_buf_size": 2097152, 00:28:19.754 "send_buf_size": 2097152, 00:28:19.754 "enable_recv_pipe": true, 00:28:19.754 "enable_quickack": false, 00:28:19.754 "enable_placement_id": 0, 00:28:19.754 "enable_zerocopy_send_server": true, 00:28:19.754 "enable_zerocopy_send_client": false, 00:28:19.754 "zerocopy_threshold": 0, 00:28:19.754 "tls_version": 0, 00:28:19.754 "enable_ktls": false 00:28:19.754 } 00:28:19.754 } 00:28:19.754 ] 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "subsystem": "vmd", 00:28:19.754 "config": [] 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "subsystem": "accel", 00:28:19.754 "config": [ 00:28:19.754 { 00:28:19.754 "method": "accel_set_options", 00:28:19.754 "params": { 00:28:19.754 "small_cache_size": 128, 00:28:19.754 "large_cache_size": 16, 00:28:19.754 "task_count": 2048, 00:28:19.754 "sequence_count": 2048, 00:28:19.754 "buf_count": 2048 00:28:19.754 } 00:28:19.754 } 00:28:19.754 ] 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "subsystem": "bdev", 00:28:19.754 "config": [ 00:28:19.754 { 00:28:19.754 "method": "bdev_set_options", 00:28:19.754 "params": { 00:28:19.754 "bdev_io_pool_size": 65535, 00:28:19.754 "bdev_io_cache_size": 256, 00:28:19.754 "bdev_auto_examine": true, 00:28:19.754 "iobuf_small_cache_size": 128, 00:28:19.754 "iobuf_large_cache_size": 16 
00:28:19.754 } 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "method": "bdev_raid_set_options", 00:28:19.754 "params": { 00:28:19.754 "process_window_size_kb": 1024, 00:28:19.754 "process_max_bandwidth_mb_sec": 0 00:28:19.754 } 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "method": "bdev_iscsi_set_options", 00:28:19.754 "params": { 00:28:19.754 "timeout_sec": 30 00:28:19.754 } 00:28:19.754 }, 00:28:19.754 { 00:28:19.754 "method": "bdev_nvme_set_options", 00:28:19.754 "params": { 00:28:19.754 "action_on_timeout": "none", 00:28:19.754 "timeout_us": 0, 00:28:19.754 "timeout_admin_us": 0, 00:28:19.754 "keep_alive_timeout_ms": 10000, 00:28:19.754 "arbitration_burst": 0, 00:28:19.754 "low_priority_weight": 0, 00:28:19.754 "medium_priority_weight": 0, 00:28:19.754 "high_priority_weight": 0, 00:28:19.754 "nvme_adminq_poll_period_us": 10000, 00:28:19.754 "nvme_ioq_poll_period_us": 0, 00:28:19.754 "io_queue_requests": 512, 00:28:19.754 "delay_cmd_submit": true, 00:28:19.754 "transport_retry_count": 4, 00:28:19.754 "bdev_retry_count": 3, 00:28:19.754 "transport_ack_timeout": 0, 00:28:19.754 "ctrlr_loss_timeout_sec": 0, 00:28:19.754 "reconnect_delay_sec": 0, 00:28:19.754 "fast_io_fail_timeout_sec": 0, 00:28:19.754 "disable_auto_failback": false, 00:28:19.754 "generate_uuids": false, 00:28:19.754 "transport_tos": 0, 00:28:19.754 "nvme_error_stat": false, 00:28:19.754 "rdma_srq_size": 0, 00:28:19.754 "io_path_stat": false, 00:28:19.754 "allow_accel_sequence": false, 00:28:19.754 "rdma_max_cq_size": 0, 00:28:19.754 "rdma_cm_event_timeout_ms": 0, 00:28:19.754 "dhchap_digests": [ 00:28:19.754 "sha256", 00:28:19.754 "sha384", 00:28:19.754 "sha512" 00:28:19.754 ], 00:28:19.754 "dhchap_dhgroups": [ 00:28:19.754 "null", 00:28:19.754 "ffdhe2048", 00:28:19.754 "ffdhe3072", 00:28:19.754 "ffdhe4096", 00:28:19.754 "ffdhe6144", 00:28:19.754 "ffdhe8192" 00:28:19.754 ] 00:28:19.754 } 00:28:19.754 }, 00:28:19.755 { 00:28:19.755 "method": "bdev_nvme_attach_controller", 00:28:19.755 "params": { 00:28:19.755 "name": "TLSTEST", 00:28:19.755 "trtype": "TCP", 00:28:19.755 "adrfam": "IPv4", 00:28:19.755 "traddr": "10.0.0.2", 00:28:19.755 "trsvcid": "4420", 00:28:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.755 "prchk_reftag": false, 00:28:19.755 "prchk_guard": false, 00:28:19.755 "ctrlr_loss_timeout_sec": 0, 00:28:19.755 "reconnect_delay_sec": 0, 00:28:19.755 "fast_io_fail_timeout_sec": 0, 00:28:19.755 "psk": "key0", 00:28:19.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.755 "hdgst": false, 00:28:19.755 "ddgst": false, 00:28:19.755 "multipath": "multipath" 00:28:19.755 } 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "method": "bdev_nvme_set_hotplug", 00:28:19.755 "params": { 00:28:19.755 "period_us": 100000, 00:28:19.755 "enable": false 00:28:19.755 } 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "method": "bdev_wait_for_examine" 00:28:19.755 } 00:28:19.755 ] 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "subsystem": "nbd", 00:28:19.755 "config": [] 00:28:19.755 } 00:28:19.755 ] 00:28:19.755 }' 00:28:19.755 [2024-10-08 20:55:48.387021] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
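The "-c /dev/fd/63" seen in the bdevperf command line above is plain bash process substitution: the test keeps the whole JSON configuration in a shell variable and feeds it to bdevperf without writing a temporary file. A small sketch of that pattern, with the configuration body elided rather than restated:

# Sketch of the /dev/fd config-passing pattern (paths and flags taken from the log above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdevperfconf='{ "subsystems": [ ... ] }'    # the JSON dumped above, elided here
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
bdevperf_pid=$!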
00:28:19.755 [2024-10-08 20:55:48.387117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767566 ] 00:28:19.755 [2024-10-08 20:55:48.461573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.016 [2024-10-08 20:55:48.588895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.276 [2024-10-08 20:55:48.779947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:20.845 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.845 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:20.845 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:28:21.106 Running I/O for 10 seconds... 00:28:22.986 1581.00 IOPS, 6.18 MiB/s [2024-10-08T18:55:52.686Z] 1550.50 IOPS, 6.06 MiB/s [2024-10-08T18:55:54.066Z] 1997.67 IOPS, 7.80 MiB/s [2024-10-08T18:55:55.039Z] 1906.75 IOPS, 7.45 MiB/s [2024-10-08T18:55:55.994Z] 2106.40 IOPS, 8.23 MiB/s [2024-10-08T18:55:56.932Z] 2048.00 IOPS, 8.00 MiB/s [2024-10-08T18:55:57.872Z] 2127.00 IOPS, 8.31 MiB/s [2024-10-08T18:55:58.813Z] 2056.38 IOPS, 8.03 MiB/s [2024-10-08T18:55:59.752Z] 1995.33 IOPS, 7.79 MiB/s [2024-10-08T18:55:59.752Z] 1946.20 IOPS, 7.60 MiB/s 00:28:30.989 Latency(us) 00:28:30.989 [2024-10-08T18:55:59.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.989 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:30.989 Verification LBA range: start 0x0 length 0x2000 00:28:30.989 TLSTESTn1 : 10.05 1949.83 7.62 0.00 0.00 65466.26 15340.28 80779.19 00:28:30.989 [2024-10-08T18:55:59.752Z] =================================================================================================================== 00:28:30.989 [2024-10-08T18:55:59.752Z] Total : 1949.83 7.62 0.00 0.00 65466.26 15340.28 80779.19 00:28:30.989 { 00:28:30.989 "results": [ 00:28:30.989 { 00:28:30.989 "job": "TLSTESTn1", 00:28:30.989 "core_mask": "0x4", 00:28:30.989 "workload": "verify", 00:28:30.989 "status": "finished", 00:28:30.989 "verify_range": { 00:28:30.989 "start": 0, 00:28:30.989 "length": 8192 00:28:30.989 }, 00:28:30.989 "queue_depth": 128, 00:28:30.989 "io_size": 4096, 00:28:30.989 "runtime": 10.046523, 00:28:30.989 "iops": 1949.8288114206277, 00:28:30.989 "mibps": 7.616518794611827, 00:28:30.989 "io_failed": 0, 00:28:30.989 "io_timeout": 0, 00:28:30.989 "avg_latency_us": 65466.26384648981, 00:28:30.989 "min_latency_us": 15340.278518518518, 00:28:30.989 "max_latency_us": 80779.18814814815 00:28:30.989 } 00:28:30.989 ], 00:28:30.989 "core_count": 1 00:28:30.989 } 00:28:31.248 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1767566 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1767566 ']' 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1767566 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767566 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767566' 00:28:31.249 killing process with pid 1767566 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1767566 00:28:31.249 Received shutdown signal, test time was about 10.000000 seconds 00:28:31.249 00:28:31.249 Latency(us) 00:28:31.249 [2024-10-08T18:56:00.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.249 [2024-10-08T18:56:00.012Z] =================================================================================================================== 00:28:31.249 [2024-10-08T18:56:00.012Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.249 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1767566 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1767420 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1767420 ']' 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1767420 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767420 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767420' 00:28:31.508 killing process with pid 1767420 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1767420 00:28:31.508 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1767420 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1768902 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1768902 
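Between subtests the log repeatedly runs killprocess on the bdevperf and nvmf_tgt pids. A simplified sketch of that teardown pattern is below; the real common/autotest_common.sh helper also branches on uname and logs differently, so treat this only as an illustration of the checks visible in the trace.

# Sketch: refuse to kill sudo, otherwise kill the pid and reap it before the next subtest.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                         # already gone
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1    # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}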
00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1768902 ']' 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.078 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 [2024-10-08 20:56:00.761260] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:32.078 [2024-10-08 20:56:00.761436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.338 [2024-10-08 20:56:00.916092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.597 [2024-10-08 20:56:01.133930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.597 [2024-10-08 20:56:01.134034] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.597 [2024-10-08 20:56:01.134070] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.597 [2024-10-08 20:56:01.134100] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.597 [2024-10-08 20:56:01.134126] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:32.597 [2024-10-08 20:56:01.135304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.597 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.597 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:32.597 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:32.597 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.597 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:32.856 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.856 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.N0kPLRkMVb 00:28:32.856 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.N0kPLRkMVb 00:28:32.856 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:33.423 [2024-10-08 20:56:01.992350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.423 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:33.681 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:33.941 [2024-10-08 20:56:02.642126] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:33.941 [2024-10-08 20:56:02.642404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.941 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:34.881 malloc0 00:28:34.881 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:35.450 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:36.021 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1769453 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1769453 /var/tmp/bdevperf.sock 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1769453 ']' 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.281 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:36.281 [2024-10-08 20:56:05.032434] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:36.281 [2024-10-08 20:56:05.032535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769453 ] 00:28:36.540 [2024-10-08 20:56:05.126266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.799 [2024-10-08 20:56:05.318743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.739 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.739 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:37.739 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:37.999 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:28:38.568 [2024-10-08 20:56:07.023200] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:38.569 nvme0n1 00:28:38.569 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:38.569 Running I/O for 1 seconds... 
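Condensed, the host-side TLS sequence just traced is: load the PSK into the bdevperf keyring, attach the controller over the secure channel, then drive I/O through bdevperf.py. The commands below are the same ones shown in the log, only grouped with comments; nothing new is introduced.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# register the PSK file under the name the attach call references
$RPC keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb
# attach the TLS-secured NVMe/TCP controller exposed by the target
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# run the configured verify workload against the attached namespace
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests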
00:28:39.767 1465.00 IOPS, 5.72 MiB/s 00:28:39.767 Latency(us) 00:28:39.767 [2024-10-08T18:56:08.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.767 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:39.767 Verification LBA range: start 0x0 length 0x2000 00:28:39.767 nvme0n1 : 1.05 1520.14 5.94 0.00 0.00 82359.60 14272.28 57865.86 00:28:39.767 [2024-10-08T18:56:08.530Z] =================================================================================================================== 00:28:39.767 [2024-10-08T18:56:08.530Z] Total : 1520.14 5.94 0.00 0.00 82359.60 14272.28 57865.86 00:28:39.767 { 00:28:39.767 "results": [ 00:28:39.767 { 00:28:39.767 "job": "nvme0n1", 00:28:39.767 "core_mask": "0x2", 00:28:39.767 "workload": "verify", 00:28:39.767 "status": "finished", 00:28:39.767 "verify_range": { 00:28:39.767 "start": 0, 00:28:39.767 "length": 8192 00:28:39.767 }, 00:28:39.767 "queue_depth": 128, 00:28:39.767 "io_size": 4096, 00:28:39.767 "runtime": 1.047928, 00:28:39.767 "iops": 1520.1426052171523, 00:28:39.767 "mibps": 5.938057051629501, 00:28:39.767 "io_failed": 0, 00:28:39.767 "io_timeout": 0, 00:28:39.767 "avg_latency_us": 82359.60362139918, 00:28:39.767 "min_latency_us": 14272.284444444444, 00:28:39.767 "max_latency_us": 57865.86074074074 00:28:39.767 } 00:28:39.767 ], 00:28:39.767 "core_count": 1 00:28:39.767 } 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1769453 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1769453 ']' 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1769453 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1769453 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1769453' 00:28:39.767 killing process with pid 1769453 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1769453 00:28:39.767 Received shutdown signal, test time was about 1.000000 seconds 00:28:39.767 00:28:39.767 Latency(us) 00:28:39.767 [2024-10-08T18:56:08.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.767 [2024-10-08T18:56:08.530Z] =================================================================================================================== 00:28:39.767 [2024-10-08T18:56:08.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.767 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1769453 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1768902 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1768902 ']' 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1768902 00:28:40.334 20:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768902 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768902' 00:28:40.334 killing process with pid 1768902 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1768902 00:28:40.334 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1768902 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1769984 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1769984 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1769984 ']' 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:40.903 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:40.903 [2024-10-08 20:56:09.441497] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:40.903 [2024-10-08 20:56:09.441604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.903 [2024-10-08 20:56:09.558113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.164 [2024-10-08 20:56:09.788019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.164 [2024-10-08 20:56:09.788124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:41.164 [2024-10-08 20:56:09.788160] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.164 [2024-10-08 20:56:09.788190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.164 [2024-10-08 20:56:09.788215] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.164 [2024-10-08 20:56:09.789602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.425 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:41.425 [2024-10-08 20:56:10.142253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.425 malloc0 00:28:41.685 [2024-10-08 20:56:10.198583] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:41.685 [2024-10-08 20:56:10.199120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1770037 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1770037 /var/tmp/bdevperf.sock 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1770037 ']' 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:41.685 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.686 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:41.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:41.686 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.686 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:41.686 [2024-10-08 20:56:10.306883] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:28:41.686 [2024-10-08 20:56:10.307036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770037 ] 00:28:41.686 [2024-10-08 20:56:10.419135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.946 [2024-10-08 20:56:10.653096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.207 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.207 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:42.207 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0kPLRkMVb 00:28:42.776 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:28:43.715 [2024-10-08 20:56:12.117363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:43.715 nvme0n1 00:28:43.715 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.715 Running I/O for 1 seconds... 00:28:45.097 1526.00 IOPS, 5.96 MiB/s 00:28:45.097 Latency(us) 00:28:45.097 [2024-10-08T18:56:13.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.097 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:45.097 Verification LBA range: start 0x0 length 0x2000 00:28:45.097 nvme0n1 : 1.06 1566.88 6.12 0.00 0.00 80063.75 15437.37 59807.67 00:28:45.097 [2024-10-08T18:56:13.860Z] =================================================================================================================== 00:28:45.097 [2024-10-08T18:56:13.860Z] Total : 1566.88 6.12 0.00 0.00 80063.75 15437.37 59807.67 00:28:45.097 { 00:28:45.097 "results": [ 00:28:45.097 { 00:28:45.097 "job": "nvme0n1", 00:28:45.097 "core_mask": "0x2", 00:28:45.097 "workload": "verify", 00:28:45.097 "status": "finished", 00:28:45.097 "verify_range": { 00:28:45.097 "start": 0, 00:28:45.097 "length": 8192 00:28:45.097 }, 00:28:45.097 "queue_depth": 128, 00:28:45.097 "io_size": 4096, 00:28:45.097 "runtime": 1.055598, 00:28:45.097 "iops": 1566.8843631761333, 00:28:45.097 "mibps": 6.120642043656771, 00:28:45.097 "io_failed": 0, 00:28:45.097 "io_timeout": 0, 00:28:45.097 "avg_latency_us": 80063.74834520131, 00:28:45.097 "min_latency_us": 15437.368888888888, 00:28:45.097 "max_latency_us": 59807.66814814815 00:28:45.097 } 00:28:45.097 ], 00:28:45.097 "core_count": 1 00:28:45.097 } 00:28:45.097 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:28:45.097 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.097 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:45.097 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.097 20:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:28:45.097 "subsystems": [ 00:28:45.097 { 00:28:45.097 "subsystem": "keyring", 00:28:45.097 "config": [ 00:28:45.097 { 00:28:45.097 "method": "keyring_file_add_key", 00:28:45.097 "params": { 00:28:45.097 "name": "key0", 00:28:45.097 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:45.097 } 00:28:45.097 } 00:28:45.097 ] 00:28:45.097 }, 00:28:45.097 { 00:28:45.097 "subsystem": "iobuf", 00:28:45.097 "config": [ 00:28:45.097 { 00:28:45.097 "method": "iobuf_set_options", 00:28:45.097 "params": { 00:28:45.097 "small_pool_count": 8192, 00:28:45.097 "large_pool_count": 1024, 00:28:45.097 "small_bufsize": 8192, 00:28:45.097 "large_bufsize": 135168 00:28:45.097 } 00:28:45.097 } 00:28:45.097 ] 00:28:45.097 }, 00:28:45.097 { 00:28:45.097 "subsystem": "sock", 00:28:45.097 "config": [ 00:28:45.097 { 00:28:45.097 "method": "sock_set_default_impl", 00:28:45.097 "params": { 00:28:45.097 "impl_name": "posix" 00:28:45.097 } 00:28:45.097 }, 00:28:45.097 { 00:28:45.097 "method": "sock_impl_set_options", 00:28:45.097 "params": { 00:28:45.097 "impl_name": "ssl", 00:28:45.097 "recv_buf_size": 4096, 00:28:45.097 "send_buf_size": 4096, 00:28:45.097 "enable_recv_pipe": true, 00:28:45.097 "enable_quickack": false, 00:28:45.097 "enable_placement_id": 0, 00:28:45.097 "enable_zerocopy_send_server": true, 00:28:45.097 "enable_zerocopy_send_client": false, 00:28:45.097 "zerocopy_threshold": 0, 00:28:45.097 "tls_version": 0, 00:28:45.097 "enable_ktls": false 00:28:45.097 } 00:28:45.097 }, 00:28:45.097 { 00:28:45.097 "method": "sock_impl_set_options", 00:28:45.097 "params": { 00:28:45.097 "impl_name": "posix", 00:28:45.097 "recv_buf_size": 2097152, 00:28:45.097 "send_buf_size": 2097152, 00:28:45.097 "enable_recv_pipe": true, 00:28:45.097 "enable_quickack": false, 00:28:45.097 "enable_placement_id": 0, 00:28:45.098 "enable_zerocopy_send_server": true, 00:28:45.098 "enable_zerocopy_send_client": false, 00:28:45.098 "zerocopy_threshold": 0, 00:28:45.098 "tls_version": 0, 00:28:45.098 "enable_ktls": false 00:28:45.098 } 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "vmd", 00:28:45.098 "config": [] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "accel", 00:28:45.098 "config": [ 00:28:45.098 { 00:28:45.098 "method": "accel_set_options", 00:28:45.098 "params": { 00:28:45.098 "small_cache_size": 128, 00:28:45.098 "large_cache_size": 16, 00:28:45.098 "task_count": 2048, 00:28:45.098 "sequence_count": 2048, 00:28:45.098 "buf_count": 2048 00:28:45.098 } 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "bdev", 00:28:45.098 "config": [ 00:28:45.098 { 00:28:45.098 "method": "bdev_set_options", 00:28:45.098 "params": { 00:28:45.098 "bdev_io_pool_size": 65535, 00:28:45.098 "bdev_io_cache_size": 256, 00:28:45.098 "bdev_auto_examine": true, 00:28:45.098 "iobuf_small_cache_size": 128, 00:28:45.098 "iobuf_large_cache_size": 16 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_raid_set_options", 00:28:45.098 "params": { 00:28:45.098 "process_window_size_kb": 1024, 00:28:45.098 "process_max_bandwidth_mb_sec": 0 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_iscsi_set_options", 00:28:45.098 "params": { 00:28:45.098 "timeout_sec": 30 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_nvme_set_options", 00:28:45.098 "params": { 00:28:45.098 "action_on_timeout": "none", 00:28:45.098 "timeout_us": 0, 00:28:45.098 
"timeout_admin_us": 0, 00:28:45.098 "keep_alive_timeout_ms": 10000, 00:28:45.098 "arbitration_burst": 0, 00:28:45.098 "low_priority_weight": 0, 00:28:45.098 "medium_priority_weight": 0, 00:28:45.098 "high_priority_weight": 0, 00:28:45.098 "nvme_adminq_poll_period_us": 10000, 00:28:45.098 "nvme_ioq_poll_period_us": 0, 00:28:45.098 "io_queue_requests": 0, 00:28:45.098 "delay_cmd_submit": true, 00:28:45.098 "transport_retry_count": 4, 00:28:45.098 "bdev_retry_count": 3, 00:28:45.098 "transport_ack_timeout": 0, 00:28:45.098 "ctrlr_loss_timeout_sec": 0, 00:28:45.098 "reconnect_delay_sec": 0, 00:28:45.098 "fast_io_fail_timeout_sec": 0, 00:28:45.098 "disable_auto_failback": false, 00:28:45.098 "generate_uuids": false, 00:28:45.098 "transport_tos": 0, 00:28:45.098 "nvme_error_stat": false, 00:28:45.098 "rdma_srq_size": 0, 00:28:45.098 "io_path_stat": false, 00:28:45.098 "allow_accel_sequence": false, 00:28:45.098 "rdma_max_cq_size": 0, 00:28:45.098 "rdma_cm_event_timeout_ms": 0, 00:28:45.098 "dhchap_digests": [ 00:28:45.098 "sha256", 00:28:45.098 "sha384", 00:28:45.098 "sha512" 00:28:45.098 ], 00:28:45.098 "dhchap_dhgroups": [ 00:28:45.098 "null", 00:28:45.098 "ffdhe2048", 00:28:45.098 "ffdhe3072", 00:28:45.098 "ffdhe4096", 00:28:45.098 "ffdhe6144", 00:28:45.098 "ffdhe8192" 00:28:45.098 ] 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_nvme_set_hotplug", 00:28:45.098 "params": { 00:28:45.098 "period_us": 100000, 00:28:45.098 "enable": false 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_malloc_create", 00:28:45.098 "params": { 00:28:45.098 "name": "malloc0", 00:28:45.098 "num_blocks": 8192, 00:28:45.098 "block_size": 4096, 00:28:45.098 "physical_block_size": 4096, 00:28:45.098 "uuid": "723bfc0e-92c7-4277-a3fe-678f7d98130a", 00:28:45.098 "optimal_io_boundary": 0, 00:28:45.098 "md_size": 0, 00:28:45.098 "dif_type": 0, 00:28:45.098 "dif_is_head_of_md": false, 00:28:45.098 "dif_pi_format": 0 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "bdev_wait_for_examine" 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "nbd", 00:28:45.098 "config": [] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "scheduler", 00:28:45.098 "config": [ 00:28:45.098 { 00:28:45.098 "method": "framework_set_scheduler", 00:28:45.098 "params": { 00:28:45.098 "name": "static" 00:28:45.098 } 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "subsystem": "nvmf", 00:28:45.098 "config": [ 00:28:45.098 { 00:28:45.098 "method": "nvmf_set_config", 00:28:45.098 "params": { 00:28:45.098 "discovery_filter": "match_any", 00:28:45.098 "admin_cmd_passthru": { 00:28:45.098 "identify_ctrlr": false 00:28:45.098 }, 00:28:45.098 "dhchap_digests": [ 00:28:45.098 "sha256", 00:28:45.098 "sha384", 00:28:45.098 "sha512" 00:28:45.098 ], 00:28:45.098 "dhchap_dhgroups": [ 00:28:45.098 "null", 00:28:45.098 "ffdhe2048", 00:28:45.098 "ffdhe3072", 00:28:45.098 "ffdhe4096", 00:28:45.098 "ffdhe6144", 00:28:45.098 "ffdhe8192" 00:28:45.098 ] 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_set_max_subsystems", 00:28:45.098 "params": { 00:28:45.098 "max_subsystems": 1024 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_set_crdt", 00:28:45.098 "params": { 00:28:45.098 "crdt1": 0, 00:28:45.098 "crdt2": 0, 00:28:45.098 "crdt3": 0 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_create_transport", 00:28:45.098 "params": { 00:28:45.098 "trtype": 
"TCP", 00:28:45.098 "max_queue_depth": 128, 00:28:45.098 "max_io_qpairs_per_ctrlr": 127, 00:28:45.098 "in_capsule_data_size": 4096, 00:28:45.098 "max_io_size": 131072, 00:28:45.098 "io_unit_size": 131072, 00:28:45.098 "max_aq_depth": 128, 00:28:45.098 "num_shared_buffers": 511, 00:28:45.098 "buf_cache_size": 4294967295, 00:28:45.098 "dif_insert_or_strip": false, 00:28:45.098 "zcopy": false, 00:28:45.098 "c2h_success": false, 00:28:45.098 "sock_priority": 0, 00:28:45.098 "abort_timeout_sec": 1, 00:28:45.098 "ack_timeout": 0, 00:28:45.098 "data_wr_pool_size": 0 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_create_subsystem", 00:28:45.098 "params": { 00:28:45.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.098 "allow_any_host": false, 00:28:45.098 "serial_number": "00000000000000000000", 00:28:45.098 "model_number": "SPDK bdev Controller", 00:28:45.098 "max_namespaces": 32, 00:28:45.098 "min_cntlid": 1, 00:28:45.098 "max_cntlid": 65519, 00:28:45.098 "ana_reporting": false 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_subsystem_add_host", 00:28:45.098 "params": { 00:28:45.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.098 "host": "nqn.2016-06.io.spdk:host1", 00:28:45.098 "psk": "key0" 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_subsystem_add_ns", 00:28:45.098 "params": { 00:28:45.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.098 "namespace": { 00:28:45.098 "nsid": 1, 00:28:45.098 "bdev_name": "malloc0", 00:28:45.098 "nguid": "723BFC0E92C74277A3FE678F7D98130A", 00:28:45.098 "uuid": "723bfc0e-92c7-4277-a3fe-678f7d98130a", 00:28:45.098 "no_auto_visible": false 00:28:45.098 } 00:28:45.098 } 00:28:45.098 }, 00:28:45.098 { 00:28:45.098 "method": "nvmf_subsystem_add_listener", 00:28:45.098 "params": { 00:28:45.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.098 "listen_address": { 00:28:45.098 "trtype": "TCP", 00:28:45.098 "adrfam": "IPv4", 00:28:45.098 "traddr": "10.0.0.2", 00:28:45.098 "trsvcid": "4420" 00:28:45.098 }, 00:28:45.098 "secure_channel": false, 00:28:45.098 "sock_impl": "ssl" 00:28:45.098 } 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 } 00:28:45.098 ] 00:28:45.098 }' 00:28:45.098 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:28:45.357 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:28:45.357 "subsystems": [ 00:28:45.357 { 00:28:45.357 "subsystem": "keyring", 00:28:45.357 "config": [ 00:28:45.357 { 00:28:45.357 "method": "keyring_file_add_key", 00:28:45.357 "params": { 00:28:45.357 "name": "key0", 00:28:45.357 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:45.357 } 00:28:45.357 } 00:28:45.357 ] 00:28:45.357 }, 00:28:45.357 { 00:28:45.357 "subsystem": "iobuf", 00:28:45.357 "config": [ 00:28:45.357 { 00:28:45.357 "method": "iobuf_set_options", 00:28:45.357 "params": { 00:28:45.357 "small_pool_count": 8192, 00:28:45.357 "large_pool_count": 1024, 00:28:45.357 "small_bufsize": 8192, 00:28:45.357 "large_bufsize": 135168 00:28:45.357 } 00:28:45.357 } 00:28:45.357 ] 00:28:45.357 }, 00:28:45.357 { 00:28:45.357 "subsystem": "sock", 00:28:45.357 "config": [ 00:28:45.357 { 00:28:45.357 "method": "sock_set_default_impl", 00:28:45.357 "params": { 00:28:45.357 "impl_name": "posix" 00:28:45.357 } 00:28:45.357 }, 00:28:45.357 { 00:28:45.357 "method": "sock_impl_set_options", 00:28:45.357 "params": { 00:28:45.357 "impl_name": "ssl", 00:28:45.357 
"recv_buf_size": 4096, 00:28:45.357 "send_buf_size": 4096, 00:28:45.357 "enable_recv_pipe": true, 00:28:45.357 "enable_quickack": false, 00:28:45.357 "enable_placement_id": 0, 00:28:45.357 "enable_zerocopy_send_server": true, 00:28:45.357 "enable_zerocopy_send_client": false, 00:28:45.357 "zerocopy_threshold": 0, 00:28:45.357 "tls_version": 0, 00:28:45.357 "enable_ktls": false 00:28:45.357 } 00:28:45.357 }, 00:28:45.357 { 00:28:45.357 "method": "sock_impl_set_options", 00:28:45.357 "params": { 00:28:45.357 "impl_name": "posix", 00:28:45.357 "recv_buf_size": 2097152, 00:28:45.357 "send_buf_size": 2097152, 00:28:45.357 "enable_recv_pipe": true, 00:28:45.357 "enable_quickack": false, 00:28:45.357 "enable_placement_id": 0, 00:28:45.357 "enable_zerocopy_send_server": true, 00:28:45.357 "enable_zerocopy_send_client": false, 00:28:45.358 "zerocopy_threshold": 0, 00:28:45.358 "tls_version": 0, 00:28:45.358 "enable_ktls": false 00:28:45.358 } 00:28:45.358 } 00:28:45.358 ] 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "subsystem": "vmd", 00:28:45.358 "config": [] 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "subsystem": "accel", 00:28:45.358 "config": [ 00:28:45.358 { 00:28:45.358 "method": "accel_set_options", 00:28:45.358 "params": { 00:28:45.358 "small_cache_size": 128, 00:28:45.358 "large_cache_size": 16, 00:28:45.358 "task_count": 2048, 00:28:45.358 "sequence_count": 2048, 00:28:45.358 "buf_count": 2048 00:28:45.358 } 00:28:45.358 } 00:28:45.358 ] 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "subsystem": "bdev", 00:28:45.358 "config": [ 00:28:45.358 { 00:28:45.358 "method": "bdev_set_options", 00:28:45.358 "params": { 00:28:45.358 "bdev_io_pool_size": 65535, 00:28:45.358 "bdev_io_cache_size": 256, 00:28:45.358 "bdev_auto_examine": true, 00:28:45.358 "iobuf_small_cache_size": 128, 00:28:45.358 "iobuf_large_cache_size": 16 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_raid_set_options", 00:28:45.358 "params": { 00:28:45.358 "process_window_size_kb": 1024, 00:28:45.358 "process_max_bandwidth_mb_sec": 0 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_iscsi_set_options", 00:28:45.358 "params": { 00:28:45.358 "timeout_sec": 30 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_nvme_set_options", 00:28:45.358 "params": { 00:28:45.358 "action_on_timeout": "none", 00:28:45.358 "timeout_us": 0, 00:28:45.358 "timeout_admin_us": 0, 00:28:45.358 "keep_alive_timeout_ms": 10000, 00:28:45.358 "arbitration_burst": 0, 00:28:45.358 "low_priority_weight": 0, 00:28:45.358 "medium_priority_weight": 0, 00:28:45.358 "high_priority_weight": 0, 00:28:45.358 "nvme_adminq_poll_period_us": 10000, 00:28:45.358 "nvme_ioq_poll_period_us": 0, 00:28:45.358 "io_queue_requests": 512, 00:28:45.358 "delay_cmd_submit": true, 00:28:45.358 "transport_retry_count": 4, 00:28:45.358 "bdev_retry_count": 3, 00:28:45.358 "transport_ack_timeout": 0, 00:28:45.358 "ctrlr_loss_timeout_sec": 0, 00:28:45.358 "reconnect_delay_sec": 0, 00:28:45.358 "fast_io_fail_timeout_sec": 0, 00:28:45.358 "disable_auto_failback": false, 00:28:45.358 "generate_uuids": false, 00:28:45.358 "transport_tos": 0, 00:28:45.358 "nvme_error_stat": false, 00:28:45.358 "rdma_srq_size": 0, 00:28:45.358 "io_path_stat": false, 00:28:45.358 "allow_accel_sequence": false, 00:28:45.358 "rdma_max_cq_size": 0, 00:28:45.358 "rdma_cm_event_timeout_ms": 0, 00:28:45.358 "dhchap_digests": [ 00:28:45.358 "sha256", 00:28:45.358 "sha384", 00:28:45.358 "sha512" 00:28:45.358 ], 00:28:45.358 "dhchap_dhgroups": [ 
00:28:45.358 "null", 00:28:45.358 "ffdhe2048", 00:28:45.358 "ffdhe3072", 00:28:45.358 "ffdhe4096", 00:28:45.358 "ffdhe6144", 00:28:45.358 "ffdhe8192" 00:28:45.358 ] 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_nvme_attach_controller", 00:28:45.358 "params": { 00:28:45.358 "name": "nvme0", 00:28:45.358 "trtype": "TCP", 00:28:45.358 "adrfam": "IPv4", 00:28:45.358 "traddr": "10.0.0.2", 00:28:45.358 "trsvcid": "4420", 00:28:45.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.358 "prchk_reftag": false, 00:28:45.358 "prchk_guard": false, 00:28:45.358 "ctrlr_loss_timeout_sec": 0, 00:28:45.358 "reconnect_delay_sec": 0, 00:28:45.358 "fast_io_fail_timeout_sec": 0, 00:28:45.358 "psk": "key0", 00:28:45.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.358 "hdgst": false, 00:28:45.358 "ddgst": false, 00:28:45.358 "multipath": "multipath" 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_nvme_set_hotplug", 00:28:45.358 "params": { 00:28:45.358 "period_us": 100000, 00:28:45.358 "enable": false 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_enable_histogram", 00:28:45.358 "params": { 00:28:45.358 "name": "nvme0n1", 00:28:45.358 "enable": true 00:28:45.358 } 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "method": "bdev_wait_for_examine" 00:28:45.358 } 00:28:45.358 ] 00:28:45.358 }, 00:28:45.358 { 00:28:45.358 "subsystem": "nbd", 00:28:45.358 "config": [] 00:28:45.358 } 00:28:45.358 ] 00:28:45.358 }' 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1770037 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1770037 ']' 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1770037 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770037 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770037' 00:28:45.358 killing process with pid 1770037 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1770037 00:28:45.358 Received shutdown signal, test time was about 1.000000 seconds 00:28:45.358 00:28:45.358 Latency(us) 00:28:45.358 [2024-10-08T18:56:14.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.358 [2024-10-08T18:56:14.121Z] =================================================================================================================== 00:28:45.358 [2024-10-08T18:56:14.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.358 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1770037 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1769984 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1769984 ']' 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1769984 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1769984 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1769984' 00:28:45.927 killing process with pid 1769984 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1769984 00:28:45.927 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1769984 00:28:46.495 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:28:46.495 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:46.495 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.495 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:46.495 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:28:46.495 "subsystems": [ 00:28:46.495 { 00:28:46.495 "subsystem": "keyring", 00:28:46.495 "config": [ 00:28:46.495 { 00:28:46.495 "method": "keyring_file_add_key", 00:28:46.495 "params": { 00:28:46.495 "name": "key0", 00:28:46.495 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:46.495 } 00:28:46.495 } 00:28:46.495 ] 00:28:46.495 }, 00:28:46.495 { 00:28:46.495 "subsystem": "iobuf", 00:28:46.495 "config": [ 00:28:46.495 { 00:28:46.495 "method": "iobuf_set_options", 00:28:46.495 "params": { 00:28:46.495 "small_pool_count": 8192, 00:28:46.495 "large_pool_count": 1024, 00:28:46.495 "small_bufsize": 8192, 00:28:46.495 "large_bufsize": 135168 00:28:46.495 } 00:28:46.495 } 00:28:46.495 ] 00:28:46.495 }, 00:28:46.495 { 00:28:46.495 "subsystem": "sock", 00:28:46.495 "config": [ 00:28:46.495 { 00:28:46.495 "method": "sock_set_default_impl", 00:28:46.495 "params": { 00:28:46.495 "impl_name": "posix" 00:28:46.495 } 00:28:46.495 }, 00:28:46.495 { 00:28:46.495 "method": "sock_impl_set_options", 00:28:46.495 "params": { 00:28:46.495 "impl_name": "ssl", 00:28:46.495 "recv_buf_size": 4096, 00:28:46.495 "send_buf_size": 4096, 00:28:46.495 "enable_recv_pipe": true, 00:28:46.495 "enable_quickack": false, 00:28:46.495 "enable_placement_id": 0, 00:28:46.495 "enable_zerocopy_send_server": true, 00:28:46.495 "enable_zerocopy_send_client": false, 00:28:46.495 "zerocopy_threshold": 0, 00:28:46.495 "tls_version": 0, 00:28:46.495 "enable_ktls": false 00:28:46.495 } 00:28:46.495 }, 00:28:46.495 { 00:28:46.495 "method": "sock_impl_set_options", 00:28:46.495 "params": { 00:28:46.495 "impl_name": "posix", 00:28:46.495 "recv_buf_size": 2097152, 00:28:46.495 "send_buf_size": 2097152, 00:28:46.495 "enable_recv_pipe": true, 00:28:46.496 "enable_quickack": false, 00:28:46.496 "enable_placement_id": 0, 00:28:46.496 "enable_zerocopy_send_server": true, 00:28:46.496 "enable_zerocopy_send_client": false, 00:28:46.496 "zerocopy_threshold": 0, 00:28:46.496 "tls_version": 0, 00:28:46.496 
"enable_ktls": false 00:28:46.496 } 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "vmd", 00:28:46.496 "config": [] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "accel", 00:28:46.496 "config": [ 00:28:46.496 { 00:28:46.496 "method": "accel_set_options", 00:28:46.496 "params": { 00:28:46.496 "small_cache_size": 128, 00:28:46.496 "large_cache_size": 16, 00:28:46.496 "task_count": 2048, 00:28:46.496 "sequence_count": 2048, 00:28:46.496 "buf_count": 2048 00:28:46.496 } 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "bdev", 00:28:46.496 "config": [ 00:28:46.496 { 00:28:46.496 "method": "bdev_set_options", 00:28:46.496 "params": { 00:28:46.496 "bdev_io_pool_size": 65535, 00:28:46.496 "bdev_io_cache_size": 256, 00:28:46.496 "bdev_auto_examine": true, 00:28:46.496 "iobuf_small_cache_size": 128, 00:28:46.496 "iobuf_large_cache_size": 16 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "bdev_raid_set_options", 00:28:46.496 "params": { 00:28:46.496 "process_window_size_kb": 1024, 00:28:46.496 "process_max_bandwidth_mb_sec": 0 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "bdev_iscsi_set_options", 00:28:46.496 "params": { 00:28:46.496 "timeout_sec": 30 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "bdev_nvme_set_options", 00:28:46.496 "params": { 00:28:46.496 "action_on_timeout": "none", 00:28:46.496 "timeout_us": 0, 00:28:46.496 "timeout_admin_us": 0, 00:28:46.496 "keep_alive_timeout_ms": 10000, 00:28:46.496 "arbitration_burst": 0, 00:28:46.496 "low_priority_weight": 0, 00:28:46.496 "medium_priority_weight": 0, 00:28:46.496 "high_priority_weight": 0, 00:28:46.496 "nvme_adminq_poll_period_us": 10000, 00:28:46.496 "nvme_ioq_poll_period_us": 0, 00:28:46.496 "io_queue_requests": 0, 00:28:46.496 "delay_cmd_submit": true, 00:28:46.496 "transport_retry_count": 4, 00:28:46.496 "bdev_retry_count": 3, 00:28:46.496 "transport_ack_timeout": 0, 00:28:46.496 "ctrlr_loss_timeout_sec": 0, 00:28:46.496 "reconnect_delay_sec": 0, 00:28:46.496 "fast_io_fail_timeout_sec": 0, 00:28:46.496 "disable_auto_failback": false, 00:28:46.496 "generate_uuids": false, 00:28:46.496 "transport_tos": 0, 00:28:46.496 "nvme_error_stat": false, 00:28:46.496 "rdma_srq_size": 0, 00:28:46.496 "io_path_stat": false, 00:28:46.496 "allow_accel_sequence": false, 00:28:46.496 "rdma_max_cq_size": 0, 00:28:46.496 "rdma_cm_event_timeout_ms": 0, 00:28:46.496 "dhchap_digests": [ 00:28:46.496 "sha256", 00:28:46.496 "sha384", 00:28:46.496 "sha512" 00:28:46.496 ], 00:28:46.496 "dhchap_dhgroups": [ 00:28:46.496 "null", 00:28:46.496 "ffdhe2048", 00:28:46.496 "ffdhe3072", 00:28:46.496 "ffdhe4096", 00:28:46.496 "ffdhe6144", 00:28:46.496 "ffdhe8192" 00:28:46.496 ] 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "bdev_nvme_set_hotplug", 00:28:46.496 "params": { 00:28:46.496 "period_us": 100000, 00:28:46.496 "enable": false 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "bdev_malloc_create", 00:28:46.496 "params": { 00:28:46.496 "name": "malloc0", 00:28:46.496 "num_blocks": 8192, 00:28:46.496 "block_size": 4096, 00:28:46.496 "physical_block_size": 4096, 00:28:46.496 "uuid": "723bfc0e-92c7-4277-a3fe-678f7d98130a", 00:28:46.496 "optimal_io_boundary": 0, 00:28:46.496 "md_size": 0, 00:28:46.496 "dif_type": 0, 00:28:46.496 "dif_is_head_of_md": false, 00:28:46.496 "dif_pi_format": 0 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": 
"bdev_wait_for_examine" 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "nbd", 00:28:46.496 "config": [] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "scheduler", 00:28:46.496 "config": [ 00:28:46.496 { 00:28:46.496 "method": "framework_set_scheduler", 00:28:46.496 "params": { 00:28:46.496 "name": "static" 00:28:46.496 } 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "subsystem": "nvmf", 00:28:46.496 "config": [ 00:28:46.496 { 00:28:46.496 "method": "nvmf_set_config", 00:28:46.496 "params": { 00:28:46.496 "discovery_filter": "match_any", 00:28:46.496 "admin_cmd_passthru": { 00:28:46.496 "identify_ctrlr": false 00:28:46.496 }, 00:28:46.496 "dhchap_digests": [ 00:28:46.496 "sha256", 00:28:46.496 "sha384", 00:28:46.496 "sha512" 00:28:46.496 ], 00:28:46.496 "dhchap_dhgroups": [ 00:28:46.496 "null", 00:28:46.496 "ffdhe2048", 00:28:46.496 "ffdhe3072", 00:28:46.496 "ffdhe4096", 00:28:46.496 "ffdhe6144", 00:28:46.496 "ffdhe8192" 00:28:46.496 ] 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_set_max_subsystems", 00:28:46.496 "params": { 00:28:46.496 "max_subsystems": 1024 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_set_crdt", 00:28:46.496 "params": { 00:28:46.496 "crdt1": 0, 00:28:46.496 "crdt2": 0, 00:28:46.496 "crdt3": 0 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_create_transport", 00:28:46.496 "params": { 00:28:46.496 "trtype": "TCP", 00:28:46.496 "max_queue_depth": 128, 00:28:46.496 "max_io_qpairs_per_ctrlr": 127, 00:28:46.496 "in_capsule_data_size": 4096, 00:28:46.496 "max_io_size": 131072, 00:28:46.496 "io_unit_size": 131072, 00:28:46.496 "max_aq_depth": 128, 00:28:46.496 "num_shared_buffers": 511, 00:28:46.496 "buf_cache_size": 4294967295, 00:28:46.496 "dif_insert_or_strip": false, 00:28:46.496 "zcopy": false, 00:28:46.496 "c2h_success": false, 00:28:46.496 "sock_priority": 0, 00:28:46.496 "abort_timeout_sec": 1, 00:28:46.496 "ack_timeout": 0, 00:28:46.496 "data_wr_pool_size": 0 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_create_subsystem", 00:28:46.496 "params": { 00:28:46.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.496 "allow_any_host": false, 00:28:46.496 "serial_number": "00000000000000000000", 00:28:46.496 "model_number": "SPDK bdev Controller", 00:28:46.496 "max_namespaces": 32, 00:28:46.496 "min_cntlid": 1, 00:28:46.496 "max_cntlid": 65519, 00:28:46.496 "ana_reporting": false 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_subsystem_add_host", 00:28:46.496 "params": { 00:28:46.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.496 "host": "nqn.2016-06.io.spdk:host1", 00:28:46.496 "psk": "key0" 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_subsystem_add_ns", 00:28:46.496 "params": { 00:28:46.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.496 "namespace": { 00:28:46.496 "nsid": 1, 00:28:46.496 "bdev_name": "malloc0", 00:28:46.496 "nguid": "723BFC0E92C74277A3FE678F7D98130A", 00:28:46.496 "uuid": "723bfc0e-92c7-4277-a3fe-678f7d98130a", 00:28:46.496 "no_auto_visible": false 00:28:46.496 } 00:28:46.496 } 00:28:46.496 }, 00:28:46.496 { 00:28:46.496 "method": "nvmf_subsystem_add_listener", 00:28:46.496 "params": { 00:28:46.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.496 "listen_address": { 00:28:46.496 "trtype": "TCP", 00:28:46.496 "adrfam": "IPv4", 00:28:46.496 "traddr": "10.0.0.2", 00:28:46.496 "trsvcid": "4420" 00:28:46.496 
}, 00:28:46.496 "secure_channel": false, 00:28:46.496 "sock_impl": "ssl" 00:28:46.496 } 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 } 00:28:46.496 ] 00:28:46.496 }' 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1770616 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1770616 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1770616 ']' 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.496 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:46.496 [2024-10-08 20:56:15.116565] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:46.496 [2024-10-08 20:56:15.116676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.757 [2024-10-08 20:56:15.260444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.757 [2024-10-08 20:56:15.483026] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.757 [2024-10-08 20:56:15.483149] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.757 [2024-10-08 20:56:15.483186] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.757 [2024-10-08 20:56:15.483217] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.757 [2024-10-08 20:56:15.483242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:46.757 [2024-10-08 20:56:15.484713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.328 [2024-10-08 20:56:15.839604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.328 [2024-10-08 20:56:15.872751] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:47.328 [2024-10-08 20:56:15.873237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1770701 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1770701 /var/tmp/bdevperf.sock 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1770701 ']' 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:47.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
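The bdevperf relaunch traced just below follows the same replay pattern: the JSON captured from the previous bdevperf instance is passed back in through -c /dev/fd/63, and since it already contains the key0 keyring entry and the TLS-enabled bdev_nvme_attach_controller call, no per-run RPCs are needed before the controller check and the final I/O pass. A sketch of that step and the verification that follows — it assumes $bperfcfg still holds the JSON saved before the previous instance was killed, and the string comparison mirrors the [[ nvme0 == ... ]] check in the trace:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Relaunch bdevperf, replaying the saved config (keyring entry + TLS controller) at startup
    $bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

    # Confirm the controller was recreated from the config under its original name, then run I/O
    name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests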
00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:47.328 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:28:47.328 "subsystems": [ 00:28:47.328 { 00:28:47.328 "subsystem": "keyring", 00:28:47.328 "config": [ 00:28:47.328 { 00:28:47.328 "method": "keyring_file_add_key", 00:28:47.328 "params": { 00:28:47.328 "name": "key0", 00:28:47.328 "path": "/tmp/tmp.N0kPLRkMVb" 00:28:47.328 } 00:28:47.328 } 00:28:47.328 ] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "iobuf", 00:28:47.328 "config": [ 00:28:47.328 { 00:28:47.328 "method": "iobuf_set_options", 00:28:47.328 "params": { 00:28:47.328 "small_pool_count": 8192, 00:28:47.328 "large_pool_count": 1024, 00:28:47.328 "small_bufsize": 8192, 00:28:47.328 "large_bufsize": 135168 00:28:47.328 } 00:28:47.328 } 00:28:47.328 ] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "sock", 00:28:47.328 "config": [ 00:28:47.328 { 00:28:47.328 "method": "sock_set_default_impl", 00:28:47.328 "params": { 00:28:47.328 "impl_name": "posix" 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "sock_impl_set_options", 00:28:47.328 "params": { 00:28:47.328 "impl_name": "ssl", 00:28:47.328 "recv_buf_size": 4096, 00:28:47.328 "send_buf_size": 4096, 00:28:47.328 "enable_recv_pipe": true, 00:28:47.328 "enable_quickack": false, 00:28:47.328 "enable_placement_id": 0, 00:28:47.328 "enable_zerocopy_send_server": true, 00:28:47.328 "enable_zerocopy_send_client": false, 00:28:47.328 "zerocopy_threshold": 0, 00:28:47.328 "tls_version": 0, 00:28:47.328 "enable_ktls": false 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "sock_impl_set_options", 00:28:47.328 "params": { 00:28:47.328 "impl_name": "posix", 00:28:47.328 "recv_buf_size": 2097152, 00:28:47.328 "send_buf_size": 2097152, 00:28:47.328 "enable_recv_pipe": true, 00:28:47.328 "enable_quickack": false, 00:28:47.328 "enable_placement_id": 0, 00:28:47.328 "enable_zerocopy_send_server": true, 00:28:47.328 "enable_zerocopy_send_client": false, 00:28:47.328 "zerocopy_threshold": 0, 00:28:47.328 "tls_version": 0, 00:28:47.328 "enable_ktls": false 00:28:47.328 } 00:28:47.328 } 00:28:47.328 ] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "vmd", 00:28:47.328 "config": [] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "accel", 00:28:47.328 "config": [ 00:28:47.328 { 00:28:47.328 "method": "accel_set_options", 00:28:47.328 "params": { 00:28:47.328 "small_cache_size": 128, 00:28:47.328 "large_cache_size": 16, 00:28:47.328 "task_count": 2048, 00:28:47.328 "sequence_count": 2048, 00:28:47.328 "buf_count": 2048 00:28:47.328 } 00:28:47.328 } 00:28:47.328 ] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "bdev", 00:28:47.328 "config": [ 00:28:47.328 { 00:28:47.328 "method": "bdev_set_options", 00:28:47.328 "params": { 00:28:47.328 "bdev_io_pool_size": 65535, 00:28:47.328 "bdev_io_cache_size": 256, 00:28:47.328 "bdev_auto_examine": true, 00:28:47.328 "iobuf_small_cache_size": 128, 00:28:47.328 "iobuf_large_cache_size": 16 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_raid_set_options", 00:28:47.328 "params": { 00:28:47.328 "process_window_size_kb": 1024, 00:28:47.328 "process_max_bandwidth_mb_sec": 0 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_iscsi_set_options", 00:28:47.328 "params": { 00:28:47.328 
"timeout_sec": 30 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_nvme_set_options", 00:28:47.328 "params": { 00:28:47.328 "action_on_timeout": "none", 00:28:47.328 "timeout_us": 0, 00:28:47.328 "timeout_admin_us": 0, 00:28:47.328 "keep_alive_timeout_ms": 10000, 00:28:47.328 "arbitration_burst": 0, 00:28:47.328 "low_priority_weight": 0, 00:28:47.328 "medium_priority_weight": 0, 00:28:47.328 "high_priority_weight": 0, 00:28:47.328 "nvme_adminq_poll_period_us": 10000, 00:28:47.328 "nvme_ioq_poll_period_us": 0, 00:28:47.328 "io_queue_requests": 512, 00:28:47.328 "delay_cmd_submit": true, 00:28:47.328 "transport_retry_count": 4, 00:28:47.328 "bdev_retry_count": 3, 00:28:47.328 "transport_ack_timeout": 0, 00:28:47.328 "ctrlr_loss_timeout_sec": 0, 00:28:47.328 "reconnect_delay_sec": 0, 00:28:47.328 "fast_io_fail_timeout_sec": 0, 00:28:47.328 "disable_auto_failback": false, 00:28:47.328 "generate_uuids": false, 00:28:47.328 "transport_tos": 0, 00:28:47.328 "nvme_error_stat": false, 00:28:47.328 "rdma_srq_size": 0, 00:28:47.328 "io_path_stat": false, 00:28:47.328 "allow_accel_sequence": false, 00:28:47.328 "rdma_max_cq_size": 0, 00:28:47.328 "rdma_cm_event_timeout_ms": 0, 00:28:47.328 "dhchap_digests": [ 00:28:47.328 "sha256", 00:28:47.328 "sha384", 00:28:47.328 "sha512" 00:28:47.328 ], 00:28:47.328 "dhchap_dhgroups": [ 00:28:47.328 "null", 00:28:47.328 "ffdhe2048", 00:28:47.328 "ffdhe3072", 00:28:47.328 "ffdhe4096", 00:28:47.328 "ffdhe6144", 00:28:47.328 "ffdhe8192" 00:28:47.328 ] 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_nvme_attach_controller", 00:28:47.328 "params": { 00:28:47.328 "name": "nvme0", 00:28:47.328 "trtype": "TCP", 00:28:47.328 "adrfam": "IPv4", 00:28:47.328 "traddr": "10.0.0.2", 00:28:47.328 "trsvcid": "4420", 00:28:47.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.328 "prchk_reftag": false, 00:28:47.328 "prchk_guard": false, 00:28:47.328 "ctrlr_loss_timeout_sec": 0, 00:28:47.328 "reconnect_delay_sec": 0, 00:28:47.328 "fast_io_fail_timeout_sec": 0, 00:28:47.328 "psk": "key0", 00:28:47.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.328 "hdgst": false, 00:28:47.328 "ddgst": false, 00:28:47.328 "multipath": "multipath" 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_nvme_set_hotplug", 00:28:47.328 "params": { 00:28:47.328 "period_us": 100000, 00:28:47.328 "enable": false 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_enable_histogram", 00:28:47.328 "params": { 00:28:47.328 "name": "nvme0n1", 00:28:47.328 "enable": true 00:28:47.328 } 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "method": "bdev_wait_for_examine" 00:28:47.328 } 00:28:47.328 ] 00:28:47.328 }, 00:28:47.328 { 00:28:47.328 "subsystem": "nbd", 00:28:47.328 "config": [] 00:28:47.328 } 00:28:47.328 ] 00:28:47.329 }' 00:28:47.329 [2024-10-08 20:56:15.995304] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:28:47.329 [2024-10-08 20:56:15.995405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770701 ] 00:28:47.589 [2024-10-08 20:56:16.103533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.589 [2024-10-08 20:56:16.326055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.850 [2024-10-08 20:56:16.591399] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:48.789 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.789 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:28:48.789 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:48.789 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:28:49.047 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.047 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:49.307 Running I/O for 1 seconds... 00:28:50.245 1421.00 IOPS, 5.55 MiB/s 00:28:50.245 Latency(us) 00:28:50.245 [2024-10-08T18:56:19.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.245 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:50.245 Verification LBA range: start 0x0 length 0x2000 00:28:50.245 nvme0n1 : 1.04 1486.37 5.81 0.00 0.00 84608.59 7718.68 60196.03 00:28:50.245 [2024-10-08T18:56:19.008Z] =================================================================================================================== 00:28:50.245 [2024-10-08T18:56:19.008Z] Total : 1486.37 5.81 0.00 0.00 84608.59 7718.68 60196.03 00:28:50.245 { 00:28:50.245 "results": [ 00:28:50.245 { 00:28:50.245 "job": "nvme0n1", 00:28:50.246 "core_mask": "0x2", 00:28:50.246 "workload": "verify", 00:28:50.246 "status": "finished", 00:28:50.246 "verify_range": { 00:28:50.246 "start": 0, 00:28:50.246 "length": 8192 00:28:50.246 }, 00:28:50.246 "queue_depth": 128, 00:28:50.246 "io_size": 4096, 00:28:50.246 "runtime": 1.042135, 00:28:50.246 "iops": 1486.3717272714189, 00:28:50.246 "mibps": 5.80613955965398, 00:28:50.246 "io_failed": 0, 00:28:50.246 "io_timeout": 0, 00:28:50.246 "avg_latency_us": 84608.58529325969, 00:28:50.246 "min_latency_us": 7718.684444444444, 00:28:50.246 "max_latency_us": 60196.02962962963 00:28:50.246 } 00:28:50.246 ], 00:28:50.246 "core_count": 1 00:28:50.246 } 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:28:50.505 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:50.506 nvmf_trace.0 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1770701 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1770701 ']' 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1770701 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770701 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770701' 00:28:50.506 killing process with pid 1770701 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1770701 00:28:50.506 Received shutdown signal, test time was about 1.000000 seconds 00:28:50.506 00:28:50.506 Latency(us) 00:28:50.506 [2024-10-08T18:56:19.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.506 [2024-10-08T18:56:19.269Z] =================================================================================================================== 00:28:50.506 [2024-10-08T18:56:19.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.506 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1770701 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.072 rmmod nvme_tcp 00:28:51.072 rmmod nvme_fabrics 00:28:51.072 rmmod nvme_keyring 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.072 20:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1770616 ']' 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1770616 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1770616 ']' 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1770616 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770616 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770616' 00:28:51.072 killing process with pid 1770616 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1770616 00:28:51.072 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1770616 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.641 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.579 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.579 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UVrLwgOaQA /tmp/tmp.7UsTzPuVZ9 /tmp/tmp.N0kPLRkMVb 00:28:53.579 00:28:53.579 real 1m55.814s 00:28:53.579 user 3m24.500s 00:28:53.579 sys 0m32.427s 00:28:53.579 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.579 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:53.579 ************************************ 00:28:53.579 END TEST nvmf_tls 
00:28:53.579 ************************************ 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:53.838 ************************************ 00:28:53.838 START TEST nvmf_fips 00:28:53.838 ************************************ 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:53.838 * Looking for test storage... 00:28:53.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:28:53.838 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.098 --rc genhtml_branch_coverage=1 00:28:54.098 --rc genhtml_function_coverage=1 00:28:54.098 --rc genhtml_legend=1 00:28:54.098 --rc geninfo_all_blocks=1 00:28:54.098 --rc geninfo_unexecuted_blocks=1 00:28:54.098 00:28:54.098 ' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.098 --rc genhtml_branch_coverage=1 00:28:54.098 --rc genhtml_function_coverage=1 00:28:54.098 --rc genhtml_legend=1 00:28:54.098 --rc geninfo_all_blocks=1 00:28:54.098 --rc geninfo_unexecuted_blocks=1 00:28:54.098 00:28:54.098 ' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.098 --rc genhtml_branch_coverage=1 00:28:54.098 --rc genhtml_function_coverage=1 00:28:54.098 --rc genhtml_legend=1 00:28:54.098 --rc geninfo_all_blocks=1 00:28:54.098 --rc geninfo_unexecuted_blocks=1 00:28:54.098 00:28:54.098 ' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.098 --rc genhtml_branch_coverage=1 00:28:54.098 --rc genhtml_function_coverage=1 00:28:54.098 --rc genhtml_legend=1 00:28:54.098 --rc geninfo_all_blocks=1 00:28:54.098 --rc geninfo_unexecuted_blocks=1 00:28:54.098 00:28:54.098 ' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:28:54.098 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:54.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:28:54.099 20:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:28:54.099 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:28:54.358 Error setting digest 00:28:54.358 4092F248A77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:28:54.358 4092F248A77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:54.358 
20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.358 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.648 20:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:57.648 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:57.648 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.648 20:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.648 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:57.649 Found net devices under 0000:84:00.0: cvl_0_0 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:57.649 Found net devices under 0000:84:00.1: cvl_0_1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.649 20:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:28:57.649 00:28:57.649 --- 10.0.0.2 ping statistics --- 00:28:57.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.649 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:28:57.649 00:28:57.649 --- 10.0.0.1 ping statistics --- 00:28:57.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.649 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.649 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1773335 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1773335 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1773335 ']' 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.649 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:57.649 [2024-10-08 20:56:26.189367] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
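This nvmf_tgt instance runs inside the cvl_0_0_ns_spdk network namespace that the nvmftestinit block above just assembled. Condensed from that trace (cvl_0_0/cvl_0_1 are simply the two e810 ports this host enumerated; the addresses are the test defaults), the setup amounts to:

    # target side lives in a namespace, initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1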
00:28:57.649 [2024-10-08 20:56:26.189551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.649 [2024-10-08 20:56:26.345522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.909 [2024-10-08 20:56:26.538232] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.909 [2024-10-08 20:56:26.538354] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.909 [2024-10-08 20:56:26.538392] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.909 [2024-10-08 20:56:26.538423] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.909 [2024-10-08 20:56:26.538449] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.909 [2024-10-08 20:56:26.539811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.bdS 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.bdS 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.bdS 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.bdS 00:28:58.169 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:58.430 [2024-10-08 20:56:27.116333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.430 [2024-10-08 20:56:27.132406] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:58.430 [2024-10-08 20:56:27.132850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.689 malloc0 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.689 20:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1773489 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1773489 /var/tmp/bdevperf.sock 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1773489 ']' 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.689 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:58.689 [2024-10-08 20:56:27.316921] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:58.689 [2024-10-08 20:56:27.317027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773489 ] 00:28:58.690 [2024-10-08 20:56:27.413498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.949 [2024-10-08 20:56:27.615755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.209 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.209 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:28:59.210 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.bdS 00:28:59.469 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:00.047 [2024-10-08 20:56:28.727533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:00.308 TLSTESTn1 00:29:00.308 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:00.567 Running I/O for 10 seconds... 
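For the FIPS variant the PSK handling mirrors the nvmf_tls run, just driven from fips.sh: the interchange-format key is written to a temp file with 0600 permissions, registered in the bdevperf keyring, and then referenced by name when the TLSTEST controller is attached with --psk key0 (traced above). A condensed sketch of that flow, with the key and paths taken from this log:

    key_path=$(mktemp -t spdk-psk.XXX)    # /tmp/spdk-psk.bdS in this run
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    # bdev_nvme_attach_controller ... --psk key0 follows, then the 10 s verify job:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests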
00:29:02.443 1550.00 IOPS, 6.05 MiB/s [2024-10-08T18:56:32.141Z] 1545.00 IOPS, 6.04 MiB/s [2024-10-08T18:56:33.522Z] 2007.67 IOPS, 7.84 MiB/s [2024-10-08T18:56:34.461Z] 1879.25 IOPS, 7.34 MiB/s [2024-10-08T18:56:35.401Z] 1800.20 IOPS, 7.03 MiB/s [2024-10-08T18:56:36.340Z] 1740.00 IOPS, 6.80 MiB/s [2024-10-08T18:56:37.279Z] 1704.43 IOPS, 6.66 MiB/s [2024-10-08T18:56:38.215Z] 1674.62 IOPS, 6.54 MiB/s [2024-10-08T18:56:39.152Z] 1715.89 IOPS, 6.70 MiB/s [2024-10-08T18:56:39.409Z] 1759.90 IOPS, 6.87 MiB/s 00:29:10.646 Latency(us) 00:29:10.646 [2024-10-08T18:56:39.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.646 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:10.646 Verification LBA range: start 0x0 length 0x2000 00:29:10.646 TLSTESTn1 : 10.05 1763.82 6.89 0.00 0.00 72364.63 13398.47 61749.48 00:29:10.646 [2024-10-08T18:56:39.409Z] =================================================================================================================== 00:29:10.646 [2024-10-08T18:56:39.409Z] Total : 1763.82 6.89 0.00 0.00 72364.63 13398.47 61749.48 00:29:10.646 { 00:29:10.646 "results": [ 00:29:10.646 { 00:29:10.646 "job": "TLSTESTn1", 00:29:10.646 "core_mask": "0x4", 00:29:10.646 "workload": "verify", 00:29:10.646 "status": "finished", 00:29:10.646 "verify_range": { 00:29:10.646 "start": 0, 00:29:10.646 "length": 8192 00:29:10.646 }, 00:29:10.646 "queue_depth": 128, 00:29:10.646 "io_size": 4096, 00:29:10.646 "runtime": 10.048642, 00:29:10.646 "iops": 1763.820424690222, 00:29:10.646 "mibps": 6.889923533946179, 00:29:10.646 "io_failed": 0, 00:29:10.646 "io_timeout": 0, 00:29:10.646 "avg_latency_us": 72364.63460367612, 00:29:10.646 "min_latency_us": 13398.471111111112, 00:29:10.646 "max_latency_us": 61749.47555555555 00:29:10.646 } 00:29:10.646 ], 00:29:10.646 "core_count": 1 00:29:10.646 } 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:10.646 nvmf_trace.0 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1773489 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1773489 ']' 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 
-- # kill -0 1773489 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773489 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773489' 00:29:10.646 killing process with pid 1773489 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1773489 00:29:10.646 Received shutdown signal, test time was about 10.000000 seconds 00:29:10.646 00:29:10.646 Latency(us) 00:29:10.646 [2024-10-08T18:56:39.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.646 [2024-10-08T18:56:39.409Z] =================================================================================================================== 00:29:10.646 [2024-10-08T18:56:39.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.646 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1773489 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.213 rmmod nvme_tcp 00:29:11.213 rmmod nvme_fabrics 00:29:11.213 rmmod nvme_keyring 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1773335 ']' 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1773335 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1773335 ']' 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1773335 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773335 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773335' 00:29:11.213 killing process with pid 1773335 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1773335 00:29:11.213 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1773335 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.780 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.bdS 00:29:13.688 00:29:13.688 real 0m19.917s 00:29:13.688 user 0m26.142s 00:29:13.688 sys 0m7.041s 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:29:13.688 ************************************ 00:29:13.688 END TEST nvmf_fips 00:29:13.688 ************************************ 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:13.688 ************************************ 00:29:13.688 START TEST nvmf_control_msg_list 00:29:13.688 ************************************ 00:29:13.688 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:29:13.948 * Looking for test storage... 
00:29:13.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.948 --rc genhtml_branch_coverage=1 00:29:13.948 --rc genhtml_function_coverage=1 00:29:13.948 --rc genhtml_legend=1 00:29:13.948 --rc geninfo_all_blocks=1 00:29:13.948 --rc geninfo_unexecuted_blocks=1 00:29:13.948 00:29:13.948 ' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.948 --rc genhtml_branch_coverage=1 00:29:13.948 --rc genhtml_function_coverage=1 00:29:13.948 --rc genhtml_legend=1 00:29:13.948 --rc geninfo_all_blocks=1 00:29:13.948 --rc geninfo_unexecuted_blocks=1 00:29:13.948 00:29:13.948 ' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.948 --rc genhtml_branch_coverage=1 00:29:13.948 --rc genhtml_function_coverage=1 00:29:13.948 --rc genhtml_legend=1 00:29:13.948 --rc geninfo_all_blocks=1 00:29:13.948 --rc geninfo_unexecuted_blocks=1 00:29:13.948 00:29:13.948 ' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.948 --rc genhtml_branch_coverage=1 00:29:13.948 --rc genhtml_function_coverage=1 00:29:13.948 --rc genhtml_legend=1 00:29:13.948 --rc geninfo_all_blocks=1 00:29:13.948 --rc geninfo_unexecuted_blocks=1 00:29:13.948 00:29:13.948 ' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.948 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.208 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.209 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:29:17.501 20:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.501 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:17.502 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.502 20:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:17.502 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:17.502 Found net devices under 0000:84:00.0: cvl_0_0 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:17.502 Found net devices under 0000:84:00.1: cvl_0_1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.502 20:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:29:17.502 00:29:17.502 --- 10.0.0.2 ping statistics --- 00:29:17.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.502 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:29:17.502 00:29:17.502 --- 10.0.0.1 ping statistics --- 00:29:17.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.502 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1776904 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1776904 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1776904 ']' 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.502 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:17.502 [2024-10-08 20:56:46.006477] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:29:17.502 [2024-10-08 20:56:46.006673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.502 [2024-10-08 20:56:46.157471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.763 [2024-10-08 20:56:46.370777] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.763 [2024-10-08 20:56:46.370903] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.763 [2024-10-08 20:56:46.370941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.763 [2024-10-08 20:56:46.370971] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.763 [2024-10-08 20:56:46.370997] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
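At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF invocation traced above) and waitforlisten is blocking until the app answers on /var/tmp/spdk.sock. A minimal hand-rolled equivalent of that launch-and-wait step, assuming an SPDK checkout as the working directory, is sketched below; the poll loop is illustrative and is not the actual waitforlisten helper from autotest_common.sh.

    # start the NVMe-oF target in the test namespace: shm id 0, all tracepoint groups enabled
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # poll until the process is alive and its JSON-RPC socket accepts requests
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done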
00:29:17.763 [2024-10-08 20:56:46.371959] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 [2024-10-08 20:56:46.629809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 Malloc0 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.023 20:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:18.023 [2024-10-08 20:56:46.696628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1777045 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1777046 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1777047 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.023 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1777045 00:29:18.282 [2024-10-08 20:56:46.817569] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:18.282 [2024-10-08 20:56:46.817934] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:18.282 [2024-10-08 20:56:46.818234] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:19.215 Initializing NVMe Controllers 00:29:19.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:29:19.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:29:19.215 Initialization complete. Launching workers. 
00:29:19.215 ======================================================== 00:29:19.215 Latency(us) 00:29:19.215 Device Information : IOPS MiB/s Average min max 00:29:19.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3828.00 14.95 260.59 172.71 909.99 00:29:19.215 ======================================================== 00:29:19.215 Total : 3828.00 14.95 260.59 172.71 909.99 00:29:19.215 00:29:19.215 Initializing NVMe Controllers 00:29:19.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:29:19.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:29:19.215 Initialization complete. Launching workers. 00:29:19.215 ======================================================== 00:29:19.215 Latency(us) 00:29:19.215 Device Information : IOPS MiB/s Average min max 00:29:19.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 28.00 0.11 36611.97 254.51 41882.26 00:29:19.215 ======================================================== 00:29:19.215 Total : 28.00 0.11 36611.97 254.51 41882.26 00:29:19.215 00:29:19.215 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1777046 00:29:19.473 Initializing NVMe Controllers 00:29:19.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:29:19.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:29:19.473 Initialization complete. Launching workers. 00:29:19.473 ======================================================== 00:29:19.473 Latency(us) 00:29:19.473 Device Information : IOPS MiB/s Average min max 00:29:19.473 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40956.08 40258.65 41898.16 00:29:19.473 ======================================================== 00:29:19.473 Total : 25.00 0.10 40956.08 40258.65 41898.16 00:29:19.473 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1777047 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.473 rmmod nvme_tcp 00:29:19.473 rmmod nvme_fabrics 00:29:19.473 rmmod nvme_keyring 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- 
# '[' -n 1776904 ']' 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1776904 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1776904 ']' 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1776904 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:29:19.473 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1776904 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1776904' 00:29:19.474 killing process with pid 1776904 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1776904 00:29:19.474 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1776904 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.042 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.947 00:29:21.947 real 0m8.250s 00:29:21.947 user 0m6.865s 00:29:21.947 sys 0m3.846s 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 ************************************ 00:29:21.947 END TEST nvmf_control_msg_list 00:29:21.947 ************************************ 
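The nvmf_control_msg_list run that ends above constrains the TCP transport to a single control message (--control-msg-num 1) with a 768-byte in-capsule data size, then points three single-queue-depth perf initiators at nqn.2024-07.io.spdk:cnode0 at the same time. Issued by hand against an already running nvmf_tgt, the traced rpc_cmd and perf invocations correspond roughly to the sketch below (a repository-root working directory and plain rpc.py/spdk_nvme_perf paths are assumed for illustration).

    # TCP transport with a deliberately tiny control-message pool
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

    # subsystem backed by a 32 MiB, 512-byte-block malloc namespace, listening on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # three concurrent single-depth initiators on separate cores, 4 KiB random reads for 1 second
    for mask in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

In the result tables above, one initiator finishes with an average latency of a few hundred microseconds while the other two sit around 36-41 ms, which is consistent with the later connections waiting on the single shared control message during setup.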
00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 ************************************ 00:29:21.947 START TEST nvmf_wait_for_buf 00:29:21.947 ************************************ 00:29:21.947 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:29:22.207 * Looking for test storage... 00:29:22.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.207 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:22.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.208 --rc genhtml_branch_coverage=1 00:29:22.208 --rc genhtml_function_coverage=1 00:29:22.208 --rc genhtml_legend=1 00:29:22.208 --rc geninfo_all_blocks=1 00:29:22.208 --rc geninfo_unexecuted_blocks=1 00:29:22.208 00:29:22.208 ' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:22.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.208 --rc genhtml_branch_coverage=1 00:29:22.208 --rc genhtml_function_coverage=1 00:29:22.208 --rc genhtml_legend=1 00:29:22.208 --rc geninfo_all_blocks=1 00:29:22.208 --rc geninfo_unexecuted_blocks=1 00:29:22.208 00:29:22.208 ' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:22.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.208 --rc genhtml_branch_coverage=1 00:29:22.208 --rc genhtml_function_coverage=1 00:29:22.208 --rc genhtml_legend=1 00:29:22.208 --rc geninfo_all_blocks=1 00:29:22.208 --rc geninfo_unexecuted_blocks=1 00:29:22.208 00:29:22.208 ' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:22.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.208 --rc genhtml_branch_coverage=1 00:29:22.208 --rc genhtml_function_coverage=1 00:29:22.208 --rc genhtml_legend=1 00:29:22.208 --rc geninfo_all_blocks=1 00:29:22.208 --rc geninfo_unexecuted_blocks=1 00:29:22.208 00:29:22.208 ' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.208 20:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:22.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.208 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.505 
20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:25.505 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.505 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:25.506 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:25.506 Found net devices under 0000:84:00.0: cvl_0_0 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:25.506 Found net devices under 0000:84:00.1: cvl_0_1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.506 20:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:29:25.506 00:29:25.506 --- 10.0.0.2 ping statistics --- 00:29:25.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.506 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:25.506 00:29:25.506 --- 10.0.0.1 ping statistics --- 00:29:25.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.506 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1779265 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1779265 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1779265 ']' 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:25.506 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:25.506 [2024-10-08 20:56:54.062623] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
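Condensed from the nvmf_tcp_init trace above: the first e810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule is inserted for TCP port 4420, and both directions are sanity-checked with ping before nvmf_tgt is started inside the namespace. A minimal bash sketch of that topology setup, with interface names and addresses taken verbatim from the trace (the SPDK_NVMF comment tag on the iptables rule and all error handling omitted):

# target-side namespace; the target interface lives inside it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring both ends (and loopback inside the namespace) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit NVMe/TCP traffic on the default port, then check reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1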
00:29:25.506 [2024-10-08 20:56:54.062721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.506 [2024-10-08 20:56:54.201234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.765 [2024-10-08 20:56:54.415256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.765 [2024-10-08 20:56:54.415370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.765 [2024-10-08 20:56:54.415408] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.765 [2024-10-08 20:56:54.415437] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.765 [2024-10-08 20:56:54.415463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.765 [2024-10-08 20:56:54.416815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:29:26.026 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.026 20:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 Malloc0 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 [2024-10-08 20:56:54.894977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:26.287 [2024-10-08 20:56:54.927284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.287 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.287 [2024-10-08 20:56:55.040920] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:28.197 Initializing NVMe Controllers 00:29:28.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:29:28.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:29:28.197 Initialization complete. Launching workers. 00:29:28.197 ======================================================== 00:29:28.197 Latency(us) 00:29:28.197 Device Information : IOPS MiB/s Average min max 00:29:28.197 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 35.00 4.37 117271.86 46545.54 191517.47 00:29:28.197 ======================================================== 00:29:28.197 Total : 35.00 4.37 117271.86 46545.54 191517.47 00:29:28.197 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=534 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 534 -eq 0 ]] 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.197 rmmod nvme_tcp 00:29:28.197 rmmod nvme_fabrics 00:29:28.197 rmmod nvme_keyring 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1779265 ']' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1779265 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1779265 ']' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1779265 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1779265 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1779265' 00:29:28.197 killing process with pid 1779265 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1779265 00:29:28.197 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1779265 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.455 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.361 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.361 00:29:30.361 real 0m8.401s 00:29:30.361 user 0m4.342s 00:29:30.361 sys 0m3.006s 00:29:30.361 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.361 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:29:30.361 ************************************ 00:29:30.361 END TEST nvmf_wait_for_buf 00:29:30.361 ************************************ 00:29:30.619 20:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:29:30.619 20:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:30.619 20:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:29:30.619 20:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:29:30.619 20:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.619 20:56:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:33.227 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:33.228 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:33.228 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:33.228 Found net devices under 0000:84:00.0: cvl_0_0 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:33.228 Found net devices under 0000:84:00.1: cvl_0_1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:33.228 ************************************ 00:29:33.228 START TEST nvmf_perf_adq 00:29:33.228 ************************************ 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:33.228 * Looking for test storage... 00:29:33.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.228 20:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:33.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.228 --rc genhtml_branch_coverage=1 00:29:33.228 --rc genhtml_function_coverage=1 00:29:33.228 --rc genhtml_legend=1 00:29:33.228 --rc geninfo_all_blocks=1 00:29:33.228 --rc geninfo_unexecuted_blocks=1 00:29:33.228 00:29:33.228 ' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:33.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.228 --rc genhtml_branch_coverage=1 00:29:33.228 --rc genhtml_function_coverage=1 00:29:33.228 --rc genhtml_legend=1 00:29:33.228 --rc geninfo_all_blocks=1 00:29:33.228 --rc geninfo_unexecuted_blocks=1 00:29:33.228 00:29:33.228 ' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:33.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.228 --rc genhtml_branch_coverage=1 00:29:33.228 --rc genhtml_function_coverage=1 00:29:33.228 --rc genhtml_legend=1 00:29:33.228 --rc geninfo_all_blocks=1 00:29:33.228 --rc geninfo_unexecuted_blocks=1 00:29:33.228 00:29:33.228 ' 00:29:33.228 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:33.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.228 --rc genhtml_branch_coverage=1 00:29:33.228 --rc genhtml_function_coverage=1 00:29:33.229 --rc genhtml_legend=1 00:29:33.229 --rc geninfo_all_blocks=1 00:29:33.229 --rc geninfo_unexecuted_blocks=1 00:29:33.229 00:29:33.229 ' 00:29:33.229 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
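The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 (i.e. cmp_versions "1.15" '<' "2") returns true for the installed lcov 1.15, so lcov_rc_opt is populated with the --rc branch/function coverage flags seen in the LCOV_OPTS/LCOV exports. A simplified sketch of that field-by-field comparison, assuming purely numeric version components (the real helper also runs each field through its decimal sanitizer, which is omitted here):

# split versions on '.', '-' or ':' and compare field by field; missing fields act as 0
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    local v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && return 1   # left side is newer, so '<' fails
        ((ver1[v] < ver2[v])) && return 0   # left side is older, so '<' holds
    done
    return 1                                # equal versions are not '<'
}

lt 1.15 2 && echo 'lcov older than 2.x: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'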
00:29:33.229 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:33.489 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.489 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.489 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.490 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:33.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:33.490 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.490 20:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.778 20:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:36.778 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.778 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:36.779 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:36.779 Found net devices under 0000:84:00.0: cvl_0_0 00:29:36.779 20:57:04 
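The gather_supported_nvmf_pci_devs pass whose xtrace fills the entries above (and repeats for the second port just below) reduces to: match Intel E810 device IDs against the PCI bus, then resolve each matching address to its kernel net device through sysfs. A condensed sketch under the assumption that a direct lspci call replaces the harness's pci_bus_cache; the -D flag and awk parsing are mine, while the device IDs (0x1592, 0x159b) and the resulting cvl_0_0/cvl_0_1 names are the ones this log reports. The harness additionally checks that each interface is up before keeping it.

  # Condensed sketch of the E810 discovery loop seen in the xtrace above.
  intel=0x8086
  net_devs=()
  for id in 0x1592 0x159b; do                                   # E810 device IDs from the log
    for pci in $(lspci -D -d "${intel#0x}:${id#0x}" | awk '{print $1}'); do
      echo "Found $pci ($intel - $id)"
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue
        echo "Found net devices under $pci: ${dev##*/}"
        net_devs+=("${dev##*/}")                                 # e.g. cvl_0_0, cvl_0_1
      done
    done
  done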
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:36.779 Found net devices under 0000:84:00.1: cvl_0_1 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:36.779 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:37.039 20:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:39.576 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.857 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:44.858 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:44.858 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:44.858 Found net devices under 0000:84:00.0: cvl_0_0 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:44.858 Found net devices under 0000:84:00.1: cvl_0_1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:29:44.858 00:29:44.858 --- 10.0.0.2 ping statistics --- 00:29:44.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.858 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:29:44.858 00:29:44.858 --- 10.0.0.1 ping statistics --- 00:29:44.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.858 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:44.858 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1784890 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1784890 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1784890 ']' 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.858 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.858 [2024-10-08 20:57:13.135554] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:29:44.858 [2024-10-08 20:57:13.135735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.858 [2024-10-08 20:57:13.296415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.858 [2024-10-08 20:57:13.514689] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.858 [2024-10-08 20:57:13.514796] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.859 [2024-10-08 20:57:13.514852] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.859 [2024-10-08 20:57:13.514914] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.859 [2024-10-08 20:57:13.514955] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.859 [2024-10-08 20:57:13.518824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.859 [2024-10-08 20:57:13.518891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.859 [2024-10-08 20:57:13.518986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.859 [2024-10-08 20:57:13.518990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.423 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.423 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:45.423 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:45.423 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:45.423 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.682 
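The adq_configure_nvmf_target 0 call that begins here, and continues in the entries below, drives the freshly started target entirely over RPC. A condensed sketch follows, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (an assumption about the equivalent standalone invocation); the RPC names and arguments are taken from the xtrace. The argument 0 selects placement-id 0 and sock-priority 0, i.e. the non-ADQ baseline that the later pass is compared against.

  # Baseline (ADQ off) target configuration, condensed from the RPC calls in the log.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  impl=$($RPC sock_get_default_impl | jq -r .impl_name)           # -> posix in this run
  $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
  $RPC framework_start_init                                       # release the --wait-for-rpc hold
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420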
20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.682 [2024-10-08 20:57:14.417583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.682 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.941 Malloc1 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:45.941 [2024-10-08 20:57:14.471265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1785053 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:45.941 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:47.840 "tick_rate": 2700000000, 00:29:47.840 "poll_groups": [ 00:29:47.840 { 00:29:47.840 "name": "nvmf_tgt_poll_group_000", 00:29:47.840 "admin_qpairs": 1, 00:29:47.840 "io_qpairs": 1, 00:29:47.840 "current_admin_qpairs": 1, 00:29:47.840 "current_io_qpairs": 1, 00:29:47.840 "pending_bdev_io": 0, 00:29:47.840 "completed_nvme_io": 19364, 00:29:47.840 "transports": [ 00:29:47.840 { 00:29:47.840 "trtype": "TCP" 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": "nvmf_tgt_poll_group_001", 00:29:47.840 "admin_qpairs": 0, 00:29:47.840 "io_qpairs": 1, 00:29:47.840 "current_admin_qpairs": 0, 00:29:47.840 "current_io_qpairs": 1, 00:29:47.840 "pending_bdev_io": 0, 00:29:47.840 "completed_nvme_io": 19375, 00:29:47.840 "transports": [ 00:29:47.840 { 00:29:47.840 "trtype": "TCP" 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": "nvmf_tgt_poll_group_002", 00:29:47.840 "admin_qpairs": 0, 00:29:47.840 "io_qpairs": 1, 00:29:47.840 "current_admin_qpairs": 0, 00:29:47.840 "current_io_qpairs": 1, 00:29:47.840 "pending_bdev_io": 0, 00:29:47.840 "completed_nvme_io": 19646, 00:29:47.840 "transports": [ 00:29:47.840 { 00:29:47.840 "trtype": "TCP" 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": "nvmf_tgt_poll_group_003", 00:29:47.840 "admin_qpairs": 0, 00:29:47.840 "io_qpairs": 1, 00:29:47.840 "current_admin_qpairs": 0, 00:29:47.840 "current_io_qpairs": 1, 00:29:47.840 "pending_bdev_io": 0, 00:29:47.840 "completed_nvme_io": 19165, 00:29:47.840 "transports": [ 00:29:47.840 { 00:29:47.840 "trtype": "TCP" 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 }' 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:47.840 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1785053 00:29:55.956 Initializing NVMe Controllers 00:29:55.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:55.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:55.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:55.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:29:55.956 Initialization complete. Launching workers. 00:29:55.956 ======================================================== 00:29:55.956 Latency(us) 00:29:55.956 Device Information : IOPS MiB/s Average min max 00:29:55.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10116.00 39.52 6326.40 2369.38 10515.85 00:29:55.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10326.00 40.34 6198.03 2377.48 10212.73 00:29:55.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10378.70 40.54 6167.47 2222.00 10326.01 00:29:55.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10231.00 39.96 6256.21 2278.60 10697.08 00:29:55.956 ======================================================== 00:29:55.956 Total : 41051.70 160.36 6236.44 2222.00 10697.08 00:29:55.956 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.956 rmmod nvme_tcp 00:29:55.956 rmmod nvme_fabrics 00:29:55.956 rmmod nvme_keyring 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1784890 ']' 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1784890 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1784890 ']' 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1784890 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:55.956 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1784890 00:29:56.216 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.216 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.216 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1784890' 00:29:56.216 killing process with pid 1784890 00:29:56.216 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1784890 00:29:56.216 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1784890 00:29:56.476 20:57:25 
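The measurement and verification that just completed (the perf_adq.sh@79-@91 entries above) condense to: run spdk_nvme_perf for 10 seconds of 4 KiB random reads from four initiator cores (-c 0xF0), and while it runs confirm via nvmf_get_stats that each of the target's four poll groups carries exactly one I/O qpair. Sketch below; calling scripts/rpc.py directly and the wording of the error message are assumptions, the commands and the jq filter are taken from the log.

  # Measurement plus distribution check, condensed from the entries above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perfpid=$!
  sleep 2   # let all four initiator cores connect before sampling target stats
  count=$("$SPDK/scripts/rpc.py" nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
  [[ $count -ne 4 ]] && echo "expected one active I/O qpair per poll group, got $count of 4"
  wait $perfpid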
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:56.476 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:56.476 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:56.476 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:56.476 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.477 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.019 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.019 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:59.019 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:59.019 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:59.279 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:01.818 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:07.098 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:07.098 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:07.098 Found net devices under 0000:84:00.0: cvl_0_0 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:07.098 Found net devices under 0000:84:00.1: cvl_0_1 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.098 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.099 20:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:30:07.099 00:30:07.099 --- 10.0.0.2 ping statistics --- 00:30:07.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.099 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:30:07.099 00:30:07.099 --- 10.0.0.1 ping statistics --- 00:30:07.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.099 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:07.099 net.core.busy_poll = 1 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
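The nvmf_tcp_init sequence that just ran here (and earlier at 20:57:12) splits the two E810 ports into a target side and an initiator side so the NVMe/TCP traffic really crosses the wire: cvl_0_0 moves into a private namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction confirms reachability. Condensed sketch of the commands visible in the surrounding entries; only the NS shorthand is mine.

  # Target/initiator split, condensed from the nvmf_tcp_init entries in this log.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator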
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:07.099 net.core.busy_read = 1 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1787654 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1787654 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1787654 ']' 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.099 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:07.099 [2024-10-08 20:57:35.497957] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:07.099 [2024-10-08 20:57:35.498064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.099 [2024-10-08 20:57:35.617453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.099 [2024-10-08 20:57:35.838147] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:07.099 [2024-10-08 20:57:35.838264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.099 [2024-10-08 20:57:35.838300] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.099 [2024-10-08 20:57:35.838340] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.099 [2024-10-08 20:57:35.838366] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.099 [2024-10-08 20:57:35.842007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.099 [2024-10-08 20:57:35.842108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.099 [2024-10-08 20:57:35.842204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.099 [2024-10-08 20:57:35.842207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.032 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.290 20:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.290 [2024-10-08 20:57:36.803906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.290 Malloc1 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.290 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.291 [2024-10-08 20:57:36.857559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1787815 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:30:08.291 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:10.189 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:30:10.189 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.189 20:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:10.189 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.189 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:30:10.189 "tick_rate": 2700000000, 00:30:10.189 "poll_groups": [ 00:30:10.189 { 00:30:10.190 "name": "nvmf_tgt_poll_group_000", 00:30:10.190 "admin_qpairs": 1, 00:30:10.190 "io_qpairs": 2, 00:30:10.190 "current_admin_qpairs": 1, 00:30:10.190 "current_io_qpairs": 2, 00:30:10.190 "pending_bdev_io": 0, 00:30:10.190 "completed_nvme_io": 24657, 00:30:10.190 "transports": [ 00:30:10.190 { 00:30:10.190 "trtype": "TCP" 00:30:10.190 } 00:30:10.190 ] 00:30:10.190 }, 00:30:10.190 { 00:30:10.190 "name": "nvmf_tgt_poll_group_001", 00:30:10.190 "admin_qpairs": 0, 00:30:10.190 "io_qpairs": 2, 00:30:10.190 "current_admin_qpairs": 0, 00:30:10.190 "current_io_qpairs": 2, 00:30:10.190 "pending_bdev_io": 0, 00:30:10.190 "completed_nvme_io": 24949, 00:30:10.190 "transports": [ 00:30:10.190 { 00:30:10.190 "trtype": "TCP" 00:30:10.190 } 00:30:10.190 ] 00:30:10.190 }, 00:30:10.190 { 00:30:10.190 "name": "nvmf_tgt_poll_group_002", 00:30:10.190 "admin_qpairs": 0, 00:30:10.190 "io_qpairs": 0, 00:30:10.190 "current_admin_qpairs": 0, 00:30:10.190 "current_io_qpairs": 0, 00:30:10.190 "pending_bdev_io": 0, 00:30:10.190 "completed_nvme_io": 0, 00:30:10.190 "transports": [ 00:30:10.190 { 00:30:10.190 "trtype": "TCP" 00:30:10.190 } 00:30:10.190 ] 00:30:10.190 }, 00:30:10.190 { 00:30:10.190 "name": "nvmf_tgt_poll_group_003", 00:30:10.190 "admin_qpairs": 0, 00:30:10.190 "io_qpairs": 0, 00:30:10.190 "current_admin_qpairs": 0, 00:30:10.190 "current_io_qpairs": 0, 00:30:10.190 "pending_bdev_io": 0, 00:30:10.190 "completed_nvme_io": 0, 00:30:10.190 "transports": [ 00:30:10.190 { 00:30:10.190 "trtype": "TCP" 00:30:10.190 } 00:30:10.190 ] 00:30:10.190 } 00:30:10.190 ] 00:30:10.190 }' 00:30:10.190 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:10.190 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:30:10.190 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:30:10.190 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:30:10.190 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1787815 00:30:18.298 Initializing NVMe Controllers 00:30:18.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:18.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:18.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:18.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:18.298 Initialization complete. Launching workers. 
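While the perf run is in flight, the harness snapshots nvmf_get_stats and counts how many poll groups are still idle. In this run two of the four poll groups carry all the I/O qpairs and two stay at zero, so count=2 and the [[ 2 -lt 2 ]] guard does not trip; fewer than two idle groups would suggest connections leaked onto queues outside the ADQ traffic class. A standalone version of that check, assuming the stock scripts/rpc.py client rather than the harness's rpc_cmd wrapper, could look like:

  # hedged sketch of the idle poll-group check; the threshold of 2 matches this 4-core (-m 0xF) run
  idle=$(scripts/rpc.py nvmf_get_stats \
         | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
         | wc -l)
  if [[ $idle -lt 2 ]]; then
      echo "ADQ steering check failed: only $idle idle poll groups" >&2
      exit 1
  fi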
00:30:18.298 ======================================================== 00:30:18.298 Latency(us) 00:30:18.298 Device Information : IOPS MiB/s Average min max 00:30:18.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6920.46 27.03 9250.42 1708.01 54912.40 00:30:18.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6134.27 23.96 10443.80 1925.09 54683.70 00:30:18.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7221.85 28.21 8861.68 1654.60 54851.92 00:30:18.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6065.37 23.69 10555.32 1702.99 56406.72 00:30:18.298 ======================================================== 00:30:18.298 Total : 26341.95 102.90 9722.21 1654.60 56406.72 00:30:18.298 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.298 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.298 rmmod nvme_tcp 00:30:18.298 rmmod nvme_fabrics 00:30:18.298 rmmod nvme_keyring 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1787654 ']' 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1787654 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1787654 ']' 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1787654 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1787654 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1787654' 00:30:18.558 killing process with pid 1787654 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1787654 00:30:18.558 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1787654 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:18.817 
20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.817 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:30:21.358 00:30:21.358 real 0m47.875s 00:30:21.358 user 2m47.966s 00:30:21.358 sys 0m10.868s 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:21.358 ************************************ 00:30:21.358 END TEST nvmf_perf_adq 00:30:21.358 ************************************ 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:21.358 ************************************ 00:30:21.358 START TEST nvmf_shutdown 00:30:21.358 ************************************ 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:21.358 * Looking for test storage... 
00:30:21.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:21.358 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.359 --rc genhtml_branch_coverage=1 00:30:21.359 --rc genhtml_function_coverage=1 00:30:21.359 --rc genhtml_legend=1 00:30:21.359 --rc geninfo_all_blocks=1 00:30:21.359 --rc geninfo_unexecuted_blocks=1 00:30:21.359 00:30:21.359 ' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.359 --rc genhtml_branch_coverage=1 00:30:21.359 --rc genhtml_function_coverage=1 00:30:21.359 --rc genhtml_legend=1 00:30:21.359 --rc geninfo_all_blocks=1 00:30:21.359 --rc geninfo_unexecuted_blocks=1 00:30:21.359 00:30:21.359 ' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.359 --rc genhtml_branch_coverage=1 00:30:21.359 --rc genhtml_function_coverage=1 00:30:21.359 --rc genhtml_legend=1 00:30:21.359 --rc geninfo_all_blocks=1 00:30:21.359 --rc geninfo_unexecuted_blocks=1 00:30:21.359 00:30:21.359 ' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.359 --rc genhtml_branch_coverage=1 00:30:21.359 --rc genhtml_function_coverage=1 00:30:21.359 --rc genhtml_legend=1 00:30:21.359 --rc geninfo_all_blocks=1 00:30:21.359 --rc geninfo_unexecuted_blocks=1 00:30:21.359 00:30:21.359 ' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:21.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:21.359 20:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:21.359 ************************************ 00:30:21.359 START TEST nvmf_shutdown_tc1 00:30:21.359 ************************************ 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:21.359 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.360 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.692 20:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.692 20:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:24.692 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.692 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:24.693 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:24.693 Found net devices under 0000:84:00.0: cvl_0_0 00:30:24.693 20:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:24.693 Found net devices under 0000:84:00.1: cvl_0_1 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.693 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:24.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:30:24.693 00:30:24.693 --- 10.0.0.2 ping statistics --- 00:30:24.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.693 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:24.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:30:24.693 00:30:24.693 --- 10.0.0.1 ping statistics --- 00:30:24.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.693 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1791131 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1791131 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1791131 ']' 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:24.693 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.693 [2024-10-08 20:57:53.185460] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:24.693 [2024-10-08 20:57:53.185561] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.693 [2024-10-08 20:57:53.294205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.952 [2024-10-08 20:57:53.517309] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.952 [2024-10-08 20:57:53.517424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.952 [2024-10-08 20:57:53.517460] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.952 [2024-10-08 20:57:53.517489] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.952 [2024-10-08 20:57:53.517516] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.952 [2024-10-08 20:57:53.521226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.952 [2024-10-08 20:57:53.521324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.952 [2024-10-08 20:57:53.521377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.952 [2024-10-08 20:57:53.521380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.952 [2024-10-08 20:57:53.698476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:24.952 20:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:24.952 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.211 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:25.211 Malloc1 
00:30:25.211 [2024-10-08 20:57:53.792870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.211 Malloc2 00:30:25.211 Malloc3 00:30:25.211 Malloc4 00:30:25.211 Malloc5 00:30:25.469 Malloc6 00:30:25.469 Malloc7 00:30:25.469 Malloc8 00:30:25.469 Malloc9 00:30:25.469 Malloc10 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1791313 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1791313 /var/tmp/bdevperf.sock 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1791313 ']' 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:25.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
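The shutdown test drives I/O from a separate bdev_svc process whose bdev configuration arrives over /dev/fd/63, produced by gen_nvmf_target_json: for each subsystem number it is given, the loop expanded below appends one bdev_nvme_attach_controller entry aimed at nqn.2016-06.io.spdk:cnode<N> on 10.0.0.2:4420. A trimmed-down generator in the same spirit (the outer "subsystems"/"bdev" wrapper is an assumption here; only the per-controller entries are copied from the trace that follows) might be:

  # hedged sketch of a gen_nvmf_target_json-style config generator
  gen_target_json() {
      local n entries=()
      for n in "$@"; do
          entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$n" "$n" "$n")")
      done
      local IFS=,                                       # join the entries with commas
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
  }
  # consumed via process substitution, which is where the /dev/fd/63 in the trace comes from:
  #   build/bin/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_target_json {1..10})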
00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.727 { 00:30:25.727 "params": { 00:30:25.727 "name": "Nvme$subsystem", 00:30:25.727 "trtype": "$TEST_TRANSPORT", 00:30:25.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.727 "adrfam": "ipv4", 00:30:25.727 "trsvcid": "$NVMF_PORT", 00:30:25.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.727 "hdgst": ${hdgst:-false}, 00:30:25.727 "ddgst": ${ddgst:-false} 00:30:25.727 }, 00:30:25.727 "method": "bdev_nvme_attach_controller" 00:30:25.727 } 00:30:25.727 EOF 00:30:25.727 )") 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.727 { 00:30:25.727 "params": { 00:30:25.727 "name": "Nvme$subsystem", 00:30:25.727 "trtype": "$TEST_TRANSPORT", 00:30:25.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.727 "adrfam": "ipv4", 00:30:25.727 "trsvcid": "$NVMF_PORT", 00:30:25.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.727 "hdgst": ${hdgst:-false}, 00:30:25.727 "ddgst": ${ddgst:-false} 00:30:25.727 }, 00:30:25.727 "method": "bdev_nvme_attach_controller" 00:30:25.727 } 00:30:25.727 EOF 00:30:25.727 )") 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.727 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.727 { 00:30:25.727 "params": { 00:30:25.727 "name": "Nvme$subsystem", 00:30:25.727 "trtype": "$TEST_TRANSPORT", 00:30:25.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.727 "adrfam": "ipv4", 00:30:25.727 "trsvcid": "$NVMF_PORT", 00:30:25.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.727 "hdgst": ${hdgst:-false}, 00:30:25.727 "ddgst": ${ddgst:-false} 00:30:25.727 }, 00:30:25.727 "method": "bdev_nvme_attach_controller" 00:30:25.727 } 00:30:25.727 EOF 00:30:25.727 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 
"trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.728 { 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme$subsystem", 00:30:25.728 "trtype": "$TEST_TRANSPORT", 00:30:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "$NVMF_PORT", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.728 "hdgst": ${hdgst:-false}, 00:30:25.728 "ddgst": ${ddgst:-false} 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 } 00:30:25.728 EOF 00:30:25.728 )") 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
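The ten nearly identical config+=() blocks traced above all come from a single loop inside gen_nvmf_target_json: each iteration renders one bdev_nvme_attach_controller entry for a cnode/host pair, the entries are then joined with commas (the IFS=, and printf '%s\n' steps in the trace lines that follow), and the result is pretty-printed through the jq . step at @582. A minimal stand-alone sketch of that pattern is below; the function name and fallback values are assumptions, and the real helper additionally wraps the joined entries into the full JSON document that bdev_svc/bdevperf consume via --json.

# Simplified sketch, not log output: the loop behind the repeated blocks above.
gen_attach_controller_blocks() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per subsystem id.
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" \
            "${NVMF_PORT:-4420}" "$subsystem" "$subsystem" \
            "${hdgst:-false}" "${ddgst:-false}")")
    done
    # Join the entries with commas, as the IFS=, / printf pair does in the trace.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

# The tc1 test drives the generator with ids 1..10 and hands the output to
# bdev_svc / bdevperf through --json /dev/fd/6x.
gen_attach_controller_blocks 1 2 3 4 5 6 7 8 9 10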
00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:25.728 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme1", 00:30:25.728 "trtype": "tcp", 00:30:25.728 "traddr": "10.0.0.2", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "4420", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.728 "hdgst": false, 00:30:25.728 "ddgst": false 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 },{ 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme2", 00:30:25.728 "trtype": "tcp", 00:30:25.728 "traddr": "10.0.0.2", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "4420", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:25.728 "hdgst": false, 00:30:25.728 "ddgst": false 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 },{ 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme3", 00:30:25.728 "trtype": "tcp", 00:30:25.728 "traddr": "10.0.0.2", 00:30:25.728 "adrfam": "ipv4", 00:30:25.728 "trsvcid": "4420", 00:30:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:25.728 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:25.728 "hdgst": false, 00:30:25.728 "ddgst": false 00:30:25.728 }, 00:30:25.728 "method": "bdev_nvme_attach_controller" 00:30:25.728 },{ 00:30:25.728 "params": { 00:30:25.728 "name": "Nvme4", 00:30:25.728 "trtype": "tcp", 00:30:25.728 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme5", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme6", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme7", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme8", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme9", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 },{ 00:30:25.729 "params": { 00:30:25.729 "name": "Nvme10", 00:30:25.729 "trtype": "tcp", 00:30:25.729 "traddr": "10.0.0.2", 00:30:25.729 "adrfam": "ipv4", 00:30:25.729 "trsvcid": "4420", 00:30:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:25.729 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:25.729 "hdgst": false, 00:30:25.729 "ddgst": false 00:30:25.729 }, 00:30:25.729 "method": "bdev_nvme_attach_controller" 00:30:25.729 }' 00:30:25.729 [2024-10-08 20:57:54.324764] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:25.729 [2024-10-08 20:57:54.324854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:25.729 [2024-10-08 20:57:54.394746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.986 [2024-10-08 20:57:54.507521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1791313 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:27.883 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:28.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1791313 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1791131 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.814 { 00:30:28.814 "params": { 00:30:28.814 "name": "Nvme$subsystem", 00:30:28.814 "trtype": "$TEST_TRANSPORT", 00:30:28.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.814 "adrfam": "ipv4", 00:30:28.814 "trsvcid": "$NVMF_PORT", 00:30:28.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.814 "hdgst": ${hdgst:-false}, 00:30:28.814 "ddgst": ${ddgst:-false} 00:30:28.814 }, 00:30:28.814 "method": "bdev_nvme_attach_controller" 00:30:28.814 } 00:30:28.814 EOF 00:30:28.814 )") 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.814 { 00:30:28.814 "params": { 00:30:28.814 "name": "Nvme$subsystem", 00:30:28.814 "trtype": "$TEST_TRANSPORT", 00:30:28.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.814 "adrfam": "ipv4", 00:30:28.814 "trsvcid": "$NVMF_PORT", 00:30:28.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.814 "hdgst": ${hdgst:-false}, 00:30:28.814 "ddgst": ${ddgst:-false} 00:30:28.814 }, 00:30:28.814 "method": "bdev_nvme_attach_controller" 00:30:28.814 } 00:30:28.814 EOF 00:30:28.814 )") 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.814 { 00:30:28.814 "params": { 00:30:28.814 "name": "Nvme$subsystem", 00:30:28.814 "trtype": "$TEST_TRANSPORT", 00:30:28.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.814 "adrfam": "ipv4", 00:30:28.814 "trsvcid": "$NVMF_PORT", 00:30:28.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.814 "hdgst": ${hdgst:-false}, 00:30:28.814 "ddgst": ${ddgst:-false} 00:30:28.814 }, 00:30:28.814 "method": "bdev_nvme_attach_controller" 00:30:28.814 } 00:30:28.814 EOF 00:30:28.814 )") 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.814 { 00:30:28.814 "params": { 00:30:28.814 "name": "Nvme$subsystem", 00:30:28.814 "trtype": "$TEST_TRANSPORT", 00:30:28.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.814 "adrfam": "ipv4", 00:30:28.814 
"trsvcid": "$NVMF_PORT", 00:30:28.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.814 "hdgst": ${hdgst:-false}, 00:30:28.814 "ddgst": ${ddgst:-false} 00:30:28.814 }, 00:30:28.814 "method": "bdev_nvme_attach_controller" 00:30:28.814 } 00:30:28.814 EOF 00:30:28.814 )") 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.814 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.814 { 00:30:28.814 "params": { 00:30:28.814 "name": "Nvme$subsystem", 00:30:28.814 "trtype": "$TEST_TRANSPORT", 00:30:28.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.814 "adrfam": "ipv4", 00:30:28.814 "trsvcid": "$NVMF_PORT", 00:30:28.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.814 "hdgst": ${hdgst:-false}, 00:30:28.814 "ddgst": ${ddgst:-false} 00:30:28.814 }, 00:30:28.814 "method": "bdev_nvme_attach_controller" 00:30:28.814 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.815 { 00:30:28.815 "params": { 00:30:28.815 "name": "Nvme$subsystem", 00:30:28.815 "trtype": "$TEST_TRANSPORT", 00:30:28.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.815 "adrfam": "ipv4", 00:30:28.815 "trsvcid": "$NVMF_PORT", 00:30:28.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.815 "hdgst": ${hdgst:-false}, 00:30:28.815 "ddgst": ${ddgst:-false} 00:30:28.815 }, 00:30:28.815 "method": "bdev_nvme_attach_controller" 00:30:28.815 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.815 { 00:30:28.815 "params": { 00:30:28.815 "name": "Nvme$subsystem", 00:30:28.815 "trtype": "$TEST_TRANSPORT", 00:30:28.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.815 "adrfam": "ipv4", 00:30:28.815 "trsvcid": "$NVMF_PORT", 00:30:28.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.815 "hdgst": ${hdgst:-false}, 00:30:28.815 "ddgst": ${ddgst:-false} 00:30:28.815 }, 00:30:28.815 "method": "bdev_nvme_attach_controller" 00:30:28.815 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.815 { 00:30:28.815 
"params": { 00:30:28.815 "name": "Nvme$subsystem", 00:30:28.815 "trtype": "$TEST_TRANSPORT", 00:30:28.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.815 "adrfam": "ipv4", 00:30:28.815 "trsvcid": "$NVMF_PORT", 00:30:28.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.815 "hdgst": ${hdgst:-false}, 00:30:28.815 "ddgst": ${ddgst:-false} 00:30:28.815 }, 00:30:28.815 "method": "bdev_nvme_attach_controller" 00:30:28.815 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.815 { 00:30:28.815 "params": { 00:30:28.815 "name": "Nvme$subsystem", 00:30:28.815 "trtype": "$TEST_TRANSPORT", 00:30:28.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.815 "adrfam": "ipv4", 00:30:28.815 "trsvcid": "$NVMF_PORT", 00:30:28.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.815 "hdgst": ${hdgst:-false}, 00:30:28.815 "ddgst": ${ddgst:-false} 00:30:28.815 }, 00:30:28.815 "method": "bdev_nvme_attach_controller" 00:30:28.815 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.815 { 00:30:28.815 "params": { 00:30:28.815 "name": "Nvme$subsystem", 00:30:28.815 "trtype": "$TEST_TRANSPORT", 00:30:28.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.815 "adrfam": "ipv4", 00:30:28.815 "trsvcid": "$NVMF_PORT", 00:30:28.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.815 "hdgst": ${hdgst:-false}, 00:30:28.815 "ddgst": ${ddgst:-false} 00:30:28.815 }, 00:30:28.815 "method": "bdev_nvme_attach_controller" 00:30:28.815 } 00:30:28.815 EOF 00:30:28.815 )") 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:30:28.815 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:29.072 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:29.072 "params": { 00:30:29.072 "name": "Nvme1", 00:30:29.072 "trtype": "tcp", 00:30:29.072 "traddr": "10.0.0.2", 00:30:29.072 "adrfam": "ipv4", 00:30:29.072 "trsvcid": "4420", 00:30:29.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.072 "hdgst": false, 00:30:29.072 "ddgst": false 00:30:29.072 }, 00:30:29.072 "method": "bdev_nvme_attach_controller" 00:30:29.072 },{ 00:30:29.072 "params": { 00:30:29.072 "name": "Nvme2", 00:30:29.072 "trtype": "tcp", 00:30:29.072 "traddr": "10.0.0.2", 00:30:29.072 "adrfam": "ipv4", 00:30:29.072 "trsvcid": "4420", 00:30:29.072 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:29.072 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:29.072 "hdgst": false, 00:30:29.072 "ddgst": false 00:30:29.072 }, 00:30:29.072 "method": "bdev_nvme_attach_controller" 00:30:29.072 },{ 00:30:29.072 "params": { 00:30:29.072 "name": "Nvme3", 00:30:29.072 "trtype": "tcp", 00:30:29.072 "traddr": "10.0.0.2", 00:30:29.072 "adrfam": "ipv4", 00:30:29.072 "trsvcid": "4420", 00:30:29.072 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:29.072 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:29.072 "hdgst": false, 00:30:29.072 "ddgst": false 00:30:29.072 }, 00:30:29.072 "method": "bdev_nvme_attach_controller" 00:30:29.072 },{ 00:30:29.072 "params": { 00:30:29.072 "name": "Nvme4", 00:30:29.072 "trtype": "tcp", 00:30:29.072 "traddr": "10.0.0.2", 00:30:29.072 "adrfam": "ipv4", 00:30:29.072 "trsvcid": "4420", 00:30:29.072 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme5", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme6", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme7", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme8", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme9", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 },{ 00:30:29.073 "params": { 00:30:29.073 "name": "Nvme10", 00:30:29.073 "trtype": "tcp", 00:30:29.073 "traddr": "10.0.0.2", 00:30:29.073 "adrfam": "ipv4", 00:30:29.073 "trsvcid": "4420", 00:30:29.073 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:29.073 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:29.073 "hdgst": false, 00:30:29.073 "ddgst": false 00:30:29.073 }, 00:30:29.073 "method": "bdev_nvme_attach_controller" 00:30:29.073 }' 00:30:29.073 [2024-10-08 20:57:57.588444] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:29.073 [2024-10-08 20:57:57.588533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791728 ] 00:30:29.073 [2024-10-08 20:57:57.653827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.073 [2024-10-08 20:57:57.768817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.970 Running I/O for 1 seconds... 00:30:31.905 1677.00 IOPS, 104.81 MiB/s 00:30:31.905 Latency(us) 00:30:31.905 [2024-10-08T18:58:00.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.905 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme1n1 : 1.17 222.50 13.91 0.00 0.00 282395.28 5679.79 268746.15 00:30:31.905 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme2n1 : 1.18 221.18 13.82 0.00 0.00 277159.77 17476.27 260978.92 00:30:31.905 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme3n1 : 1.16 220.68 13.79 0.00 0.00 277253.12 17282.09 267192.70 00:30:31.905 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme4n1 : 1.15 222.02 13.88 0.00 0.00 270284.80 25049.32 276513.37 00:30:31.905 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme5n1 : 1.18 219.89 13.74 0.00 0.00 268263.49 3689.43 268746.15 00:30:31.905 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme6n1 : 1.19 214.78 13.42 0.00 0.00 270253.70 20583.16 290494.39 00:30:31.905 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme7n1 : 1.17 218.33 13.65 0.00 0.00 260648.58 31845.64 257872.02 00:30:31.905 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification 
LBA range: start 0x0 length 0x400 00:30:31.905 Nvme8n1 : 1.19 215.77 13.49 0.00 0.00 259371.61 18155.90 268746.15 00:30:31.905 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme9n1 : 1.20 213.92 13.37 0.00 0.00 257439.67 19709.35 285834.05 00:30:31.905 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:31.905 Verification LBA range: start 0x0 length 0x400 00:30:31.905 Nvme10n1 : 1.20 213.18 13.32 0.00 0.00 253911.99 20194.80 292047.83 00:30:31.905 [2024-10-08T18:58:00.668Z] =================================================================================================================== 00:30:31.905 [2024-10-08T18:58:00.668Z] Total : 2182.26 136.39 0.00 0.00 267736.65 3689.43 292047.83 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.163 rmmod nvme_tcp 00:30:32.163 rmmod nvme_fabrics 00:30:32.163 rmmod nvme_keyring 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1791131 ']' 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1791131 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1791131 ']' 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1791131 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:32.163 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1791131 00:30:32.421 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:32.421 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:32.421 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1791131' 00:30:32.421 killing process with pid 1791131 00:30:32.421 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1791131 00:30:32.421 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1791131 00:30:32.991 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:32.991 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:32.991 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:32.991 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.992 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.899 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.899 00:30:34.899 real 0m13.663s 00:30:34.899 user 0m37.715s 00:30:34.899 sys 0m4.210s 00:30:34.899 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.899 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:34.899 ************************************ 00:30:34.899 END TEST nvmf_shutdown_tc1 00:30:34.899 ************************************ 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:35.160 ************************************ 00:30:35.160 START TEST nvmf_shutdown_tc2 00:30:35.160 ************************************ 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:35.160 20:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:35.160 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:35.160 20:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:35.160 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:35.160 Found net devices under 0000:84:00.0: cvl_0_0 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.160 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:35.161 Found net devices under 0000:84:00.1: cvl_0_1 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.161 20:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:30:35.161 00:30:35.161 --- 10.0.0.2 ping statistics --- 00:30:35.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.161 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:35.161 00:30:35.161 --- 10.0.0.1 ping statistics --- 00:30:35.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.161 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:35.161 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:35.421 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:35.421 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:35.421 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1792497 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1792497 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1792497 ']' 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
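Note: the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) builds the point-to-point topology used by these shutdown tests: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability before nvme-tcp is modprobed. A minimal standalone sketch of the same steps, assuming the two e810 ports have already been exposed as cvl_0_0 and cvl_0_1 by the earlier device scan:

# target-side port goes into its own namespace; initiator-side port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and verify the link both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1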
00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:35.422 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.422 [2024-10-08 20:58:04.068459] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:35.422 [2024-10-08 20:58:04.068669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.681 [2024-10-08 20:58:04.231870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.940 [2024-10-08 20:58:04.448802] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.940 [2024-10-08 20:58:04.448915] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.940 [2024-10-08 20:58:04.448951] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.940 [2024-10-08 20:58:04.448981] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.940 [2024-10-08 20:58:04.449008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.940 [2024-10-08 20:58:04.452499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.940 [2024-10-08 20:58:04.452596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.940 [2024-10-08 20:58:04.452646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.940 [2024-10-08 20:58:04.452657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.940 [2024-10-08 20:58:04.634426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:35.940 20:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.940 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.941 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.199 Malloc1 
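Note: the create_subsystems phase above batches its RPCs instead of issuing them one by one: shutdown.sh@27 removes any stale rpcs.txt, the @28-29 loop cats one block of commands per subsystem (1 through 10) into that file, and shutdown.sh@36 replays the whole batch through rpc_cmd in a single pass, which is what produces the Malloc1 through Malloc10 bdevs echoed around this point. The command bodies themselves are not shown by xtrace; the fragment below is a hypothetical reconstruction of what one iteration appends, using RPC names that exist in SPDK but placeholder sizes and serial numbers:

# hypothetical contents appended for subsystem 1; sizes and serials are illustrative only
cat << EOF >> rpcs.txt
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
# after the loop, replay the accumulated batch against the running target
rpc_cmd < rpcs.txt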
00:30:36.199 [2024-10-08 20:58:04.728753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.199 Malloc2 00:30:36.199 Malloc3 00:30:36.199 Malloc4 00:30:36.199 Malloc5 00:30:36.199 Malloc6 00:30:36.457 Malloc7 00:30:36.457 Malloc8 00:30:36.457 Malloc9 00:30:36.457 Malloc10 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1792680 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1792680 /var/tmp/bdevperf.sock 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1792680 ']' 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:36.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
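Note: shutdown.sh@103-105 above launches bdevperf against the ten subsystems; --json /dev/fd/63 is a process substitution fed by gen_nvmf_target_json 1 2 ... 10, and the "bdev_nvme_attach_controller" parameter fragments printed below are that generator at work (nvmf/common.sh@558-584 builds one fragment per subsystem and jq merges them into the final config). The invocation is equivalent in shape to the following sketch (paths abbreviated):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10
# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verification workload,
# -t 10: run for 10 seconds (the "Running I/O for 10 seconds..." line further below)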
00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.457 { 00:30:36.457 "params": { 00:30:36.457 "name": "Nvme$subsystem", 00:30:36.457 "trtype": "$TEST_TRANSPORT", 00:30:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.457 "adrfam": "ipv4", 00:30:36.457 "trsvcid": "$NVMF_PORT", 00:30:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.457 "hdgst": ${hdgst:-false}, 00:30:36.457 "ddgst": ${ddgst:-false} 00:30:36.457 }, 00:30:36.457 "method": "bdev_nvme_attach_controller" 00:30:36.457 } 00:30:36.457 EOF 00:30:36.457 )") 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.457 { 00:30:36.457 "params": { 00:30:36.457 "name": "Nvme$subsystem", 00:30:36.457 "trtype": "$TEST_TRANSPORT", 00:30:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.457 "adrfam": "ipv4", 00:30:36.457 "trsvcid": "$NVMF_PORT", 00:30:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.457 "hdgst": ${hdgst:-false}, 00:30:36.457 "ddgst": ${ddgst:-false} 00:30:36.457 }, 00:30:36.457 "method": "bdev_nvme_attach_controller" 00:30:36.457 } 00:30:36.457 EOF 00:30:36.457 )") 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.457 { 00:30:36.457 "params": { 00:30:36.457 "name": "Nvme$subsystem", 00:30:36.457 "trtype": "$TEST_TRANSPORT", 00:30:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.457 "adrfam": "ipv4", 00:30:36.457 "trsvcid": "$NVMF_PORT", 00:30:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.457 "hdgst": ${hdgst:-false}, 00:30:36.457 "ddgst": ${ddgst:-false} 00:30:36.457 }, 00:30:36.457 "method": "bdev_nvme_attach_controller" 00:30:36.457 } 00:30:36.457 EOF 00:30:36.457 )") 00:30:36.457 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.715 { 00:30:36.715 "params": { 00:30:36.715 "name": "Nvme$subsystem", 00:30:36.715 
"trtype": "$TEST_TRANSPORT", 00:30:36.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.715 "adrfam": "ipv4", 00:30:36.715 "trsvcid": "$NVMF_PORT", 00:30:36.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.715 "hdgst": ${hdgst:-false}, 00:30:36.715 "ddgst": ${ddgst:-false} 00:30:36.715 }, 00:30:36.715 "method": "bdev_nvme_attach_controller" 00:30:36.715 } 00:30:36.715 EOF 00:30:36.715 )") 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.715 { 00:30:36.715 "params": { 00:30:36.715 "name": "Nvme$subsystem", 00:30:36.715 "trtype": "$TEST_TRANSPORT", 00:30:36.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.715 "adrfam": "ipv4", 00:30:36.715 "trsvcid": "$NVMF_PORT", 00:30:36.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.715 "hdgst": ${hdgst:-false}, 00:30:36.715 "ddgst": ${ddgst:-false} 00:30:36.715 }, 00:30:36.715 "method": "bdev_nvme_attach_controller" 00:30:36.715 } 00:30:36.715 EOF 00:30:36.715 )") 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.715 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.715 { 00:30:36.715 "params": { 00:30:36.715 "name": "Nvme$subsystem", 00:30:36.715 "trtype": "$TEST_TRANSPORT", 00:30:36.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.715 "adrfam": "ipv4", 00:30:36.715 "trsvcid": "$NVMF_PORT", 00:30:36.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.715 "hdgst": ${hdgst:-false}, 00:30:36.715 "ddgst": ${ddgst:-false} 00:30:36.715 }, 00:30:36.715 "method": "bdev_nvme_attach_controller" 00:30:36.715 } 00:30:36.715 EOF 00:30:36.716 )") 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.716 { 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme$subsystem", 00:30:36.716 "trtype": "$TEST_TRANSPORT", 00:30:36.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "$NVMF_PORT", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.716 "hdgst": ${hdgst:-false}, 00:30:36.716 "ddgst": ${ddgst:-false} 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 } 00:30:36.716 EOF 00:30:36.716 )") 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.716 20:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.716 { 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme$subsystem", 00:30:36.716 "trtype": "$TEST_TRANSPORT", 00:30:36.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "$NVMF_PORT", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.716 "hdgst": ${hdgst:-false}, 00:30:36.716 "ddgst": ${ddgst:-false} 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 } 00:30:36.716 EOF 00:30:36.716 )") 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.716 { 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme$subsystem", 00:30:36.716 "trtype": "$TEST_TRANSPORT", 00:30:36.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "$NVMF_PORT", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.716 "hdgst": ${hdgst:-false}, 00:30:36.716 "ddgst": ${ddgst:-false} 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 } 00:30:36.716 EOF 00:30:36.716 )") 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:36.716 { 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme$subsystem", 00:30:36.716 "trtype": "$TEST_TRANSPORT", 00:30:36.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "$NVMF_PORT", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.716 "hdgst": ${hdgst:-false}, 00:30:36.716 "ddgst": ${ddgst:-false} 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 } 00:30:36.716 EOF 00:30:36.716 )") 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:30:36.716 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme1", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme2", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme3", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme4", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme5", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme6", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme7", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme8", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme9", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 },{ 00:30:36.716 "params": { 00:30:36.716 "name": "Nvme10", 00:30:36.716 "trtype": "tcp", 00:30:36.716 "traddr": "10.0.0.2", 00:30:36.716 "adrfam": "ipv4", 00:30:36.716 "trsvcid": "4420", 00:30:36.716 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:36.716 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:36.716 "hdgst": false, 00:30:36.716 "ddgst": false 00:30:36.716 }, 00:30:36.716 "method": "bdev_nvme_attach_controller" 00:30:36.716 }' 00:30:36.716 [2024-10-08 20:58:05.260437] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:36.716 [2024-10-08 20:58:05.260523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792680 ] 00:30:36.716 [2024-10-08 20:58:05.327562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.716 [2024-10-08 20:58:05.440295] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.614 Running I/O for 10 seconds... 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=16 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 16 -ge 100 ']' 00:30:38.873 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1792680 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1792680 ']' 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1792680 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1792680 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:39.131 20:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1792680' 00:30:39.131 killing process with pid 1792680 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1792680 00:30:39.131 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1792680 00:30:39.389 Received shutdown signal, test time was about 0.832326 seconds 00:30:39.389 00:30:39.389 Latency(us) 00:30:39.389 [2024-10-08T18:58:08.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme1n1 : 0.81 235.97 14.75 0.00 0.00 266997.76 20000.62 242337.56 00:30:39.389 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme2n1 : 0.81 237.17 14.82 0.00 0.00 259048.23 32428.18 243891.01 00:30:39.389 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme3n1 : 0.80 238.91 14.93 0.00 0.00 250850.92 17282.09 265639.25 00:30:39.389 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme4n1 : 0.80 239.99 15.00 0.00 0.00 243861.18 30680.56 254765.13 00:30:39.389 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme5n1 : 0.79 162.18 10.14 0.00 0.00 350635.43 21554.06 307582.29 00:30:39.389 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme6n1 : 0.83 231.95 14.50 0.00 0.00 240469.84 20583.16 267192.70 00:30:39.389 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme7n1 : 0.82 235.09 14.69 0.00 0.00 230179.02 36311.80 260978.92 00:30:39.389 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme8n1 : 0.82 233.46 14.59 0.00 0.00 226426.88 20583.16 268746.15 00:30:39.389 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme9n1 : 0.83 230.91 14.43 0.00 0.00 223447.92 21068.61 262532.36 00:30:39.389 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:39.389 Verification LBA range: start 0x0 length 0x400 00:30:39.389 Nvme10n1 : 0.78 163.98 10.25 0.00 0.00 299188.53 28932.93 278066.82 00:30:39.389 [2024-10-08T18:58:08.152Z] =================================================================================================================== 00:30:39.390 [2024-10-08T18:58:08.153Z] Total : 2209.62 138.10 0.00 0.00 254410.47 17282.09 307582.29 00:30:39.647 20:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.579 rmmod nvme_tcp 00:30:40.579 rmmod nvme_fabrics 00:30:40.579 rmmod nvme_keyring 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1792497 ']' 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1792497 ']' 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1792497' 00:30:40.579 killing process with pid 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@969 -- # kill 1792497 00:30:40.579 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1792497 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.514 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.420 00:30:43.420 real 0m8.309s 00:30:43.420 user 0m24.802s 00:30:43.420 sys 0m1.704s 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:43.420 ************************************ 00:30:43.420 END TEST nvmf_shutdown_tc2 00:30:43.420 ************************************ 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:43.420 ************************************ 00:30:43.420 START TEST nvmf_shutdown_tc3 00:30:43.420 ************************************ 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:43.420 20:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:43.420 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.420 20:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:43.420 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:43.420 Found net devices under 0000:84:00.0: cvl_0_0 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.420 20:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:43.420 Found net devices under 0000:84:00.1: cvl_0_1 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.420 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.421 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.680 20:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:30:43.680 00:30:43.680 --- 10.0.0.2 ping statistics --- 00:30:43.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.680 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:43.680 00:30:43.680 --- 10.0.0.1 ping statistics --- 00:30:43.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.680 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.680 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1793587 00:30:43.681 20:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1793587 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1793587 ']' 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.681 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.681 [2024-10-08 20:58:12.417887] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:43.681 [2024-10-08 20:58:12.418042] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.940 [2024-10-08 20:58:12.575337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.199 [2024-10-08 20:58:12.795251] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.199 [2024-10-08 20:58:12.795368] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.199 [2024-10-08 20:58:12.795405] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.199 [2024-10-08 20:58:12.795436] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.199 [2024-10-08 20:58:12.795463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
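The nvmf_tcp_init trace above boils down to a handful of iproute2/iptables steps: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, NVMe/TCP's default port 4420 is opened, and a ping in each direction confirms the back-to-back link. A condensed sketch, using the interface names, addresses and port from this run (the real helper also flushes any old addresses first and tags the iptables rule with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

nvmf_tgt is then started inside that namespace with "-i 0 -e 0xFFFF -m 0x1E"; the core mask 0x1E is binary 11110, so the four reactors land on cores 1 through 4, which matches the "Reactor started on core 1..4" notices that follow.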
00:30:44.199 [2024-10-08 20:58:12.799151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.199 [2024-10-08 20:58:12.799247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.199 [2024-10-08 20:58:12.799303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:44.199 [2024-10-08 20:58:12.799307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.131 [2024-10-08 20:58:13.872746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.131 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.389 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.389 Malloc1 00:30:45.389 [2024-10-08 20:58:13.965958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.389 Malloc2 00:30:45.389 Malloc3 00:30:45.389 Malloc4 00:30:45.389 Malloc5 00:30:45.647 Malloc6 00:30:45.647 Malloc7 00:30:45.647 Malloc8 00:30:45.647 Malloc9 00:30:45.647 Malloc10 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1793903 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1793903 /var/tmp/bdevperf.sock 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1793903 ']' 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.914 20:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.914 { 00:30:45.914 "params": { 00:30:45.914 "name": "Nvme$subsystem", 00:30:45.914 "trtype": "$TEST_TRANSPORT", 00:30:45.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.914 "adrfam": "ipv4", 00:30:45.914 "trsvcid": "$NVMF_PORT", 00:30:45.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.914 "hdgst": ${hdgst:-false}, 00:30:45.914 "ddgst": ${ddgst:-false} 00:30:45.914 }, 00:30:45.914 "method": "bdev_nvme_attach_controller" 00:30:45.914 } 00:30:45.914 EOF 00:30:45.914 )") 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.914 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.914 { 00:30:45.914 "params": { 00:30:45.914 "name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.915 "hdgst": ${hdgst:-false}, 00:30:45.915 "ddgst": ${ddgst:-false} 00:30:45.915 }, 00:30:45.915 "method": "bdev_nvme_attach_controller" 00:30:45.915 } 00:30:45.915 EOF 00:30:45.915 )") 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.915 { 00:30:45.915 "params": { 00:30:45.915 
"name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.915 "hdgst": ${hdgst:-false}, 00:30:45.915 "ddgst": ${ddgst:-false} 00:30:45.915 }, 00:30:45.915 "method": "bdev_nvme_attach_controller" 00:30:45.915 } 00:30:45.915 EOF 00:30:45.915 )") 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.915 { 00:30:45.915 "params": { 00:30:45.915 "name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.915 "hdgst": ${hdgst:-false}, 00:30:45.915 "ddgst": ${ddgst:-false} 00:30:45.915 }, 00:30:45.915 "method": "bdev_nvme_attach_controller" 00:30:45.915 } 00:30:45.915 EOF 00:30:45.915 )") 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.915 { 00:30:45.915 "params": { 00:30:45.915 "name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.915 "hdgst": ${hdgst:-false}, 00:30:45.915 "ddgst": ${ddgst:-false} 00:30:45.915 }, 00:30:45.915 "method": "bdev_nvme_attach_controller" 00:30:45.915 } 00:30:45.915 EOF 00:30:45.915 )") 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.915 { 00:30:45.915 "params": { 00:30:45.915 "name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.915 "hdgst": ${hdgst:-false}, 00:30:45.915 "ddgst": ${ddgst:-false} 00:30:45.915 }, 00:30:45.915 "method": "bdev_nvme_attach_controller" 00:30:45.915 } 00:30:45.915 EOF 00:30:45.915 )") 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:30:45.915 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.915 { 00:30:45.915 "params": { 00:30:45.915 "name": "Nvme$subsystem", 00:30:45.915 "trtype": "$TEST_TRANSPORT", 00:30:45.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.915 "adrfam": "ipv4", 00:30:45.915 "trsvcid": "$NVMF_PORT", 00:30:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.916 "hdgst": ${hdgst:-false}, 00:30:45.916 "ddgst": ${ddgst:-false} 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 } 00:30:45.916 EOF 00:30:45.916 )") 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.916 { 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme$subsystem", 00:30:45.916 "trtype": "$TEST_TRANSPORT", 00:30:45.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "$NVMF_PORT", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.916 "hdgst": ${hdgst:-false}, 00:30:45.916 "ddgst": ${ddgst:-false} 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 } 00:30:45.916 EOF 00:30:45.916 )") 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.916 { 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme$subsystem", 00:30:45.916 "trtype": "$TEST_TRANSPORT", 00:30:45.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "$NVMF_PORT", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.916 "hdgst": ${hdgst:-false}, 00:30:45.916 "ddgst": ${ddgst:-false} 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 } 00:30:45.916 EOF 00:30:45.916 )") 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.916 { 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme$subsystem", 00:30:45.916 "trtype": "$TEST_TRANSPORT", 00:30:45.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "$NVMF_PORT", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.916 "hdgst": ${hdgst:-false}, 00:30:45.916 "ddgst": ${ddgst:-false} 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 } 00:30:45.916 EOF 00:30:45.916 )") 00:30:45.916 20:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:30:45.916 20:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme1", 00:30:45.916 "trtype": "tcp", 00:30:45.916 "traddr": "10.0.0.2", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "4420", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.916 "hdgst": false, 00:30:45.916 "ddgst": false 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 },{ 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme2", 00:30:45.916 "trtype": "tcp", 00:30:45.916 "traddr": "10.0.0.2", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "4420", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:45.916 "hdgst": false, 00:30:45.916 "ddgst": false 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.916 },{ 00:30:45.916 "params": { 00:30:45.916 "name": "Nvme3", 00:30:45.916 "trtype": "tcp", 00:30:45.916 "traddr": "10.0.0.2", 00:30:45.916 "adrfam": "ipv4", 00:30:45.916 "trsvcid": "4420", 00:30:45.916 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:45.916 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:45.916 "hdgst": false, 00:30:45.916 "ddgst": false 00:30:45.916 }, 00:30:45.916 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme4", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme5", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme6", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme7", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme8", 00:30:45.917 "trtype": "tcp", 
00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme9", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 },{ 00:30:45.917 "params": { 00:30:45.917 "name": "Nvme10", 00:30:45.917 "trtype": "tcp", 00:30:45.917 "traddr": "10.0.0.2", 00:30:45.917 "adrfam": "ipv4", 00:30:45.917 "trsvcid": "4420", 00:30:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:45.917 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:45.917 "hdgst": false, 00:30:45.917 "ddgst": false 00:30:45.917 }, 00:30:45.917 "method": "bdev_nvme_attach_controller" 00:30:45.917 }' 00:30:45.917 [2024-10-08 20:58:14.502911] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:45.917 [2024-10-08 20:58:14.503011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793903 ] 00:30:45.917 [2024-10-08 20:58:14.578357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.180 [2024-10-08 20:58:14.695456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.552 Running I/O for 10 seconds... 
00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.484 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:48.485 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.485 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:48.485 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:48.485 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1793587 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1793587 ']' 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1793587 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1793587 00:30:48.759 1668.00 IOPS, 104.25 MiB/s [2024-10-08T18:58:17.522Z] 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1793587' 00:30:48.759 killing process with pid 1793587 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1793587 00:30:48.759 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1793587 00:30:48.759 [2024-10-08 20:58:17.354010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354156] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 
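The shutdown trigger itself is a small poll-then-kill sequence: waitforio asks bdevperf over its RPC socket for Nvme1n1's read count (67 on the first pass, 131 a quarter-second later), breaks once at least 100 reads have completed, and then the nvmf_tgt process (pid 1793587) is killed while bdevperf still has ten active NVMe/TCP connections; the storm of nvmf_tcp_qpair_set_recv_state errors around this point is that teardown. A standalone sketch of the polling loop, assuming SPDK's scripts/rpc.py stands in for the test's rpc_cmd wrapper and $nvmfpid holds the target's pid:

  # Poll bdevperf's iostat until Nvme1n1 has served at least 100 reads, then kill the target.
  for i in $(seq 10 -1 1); do
      reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 0.25
  done
  kill "$nvmfpid"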
00:30:48.759 [2024-10-08 20:58:17.354440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.759 [2024-10-08 20:58:17.354662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is 
same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.354907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb69a0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.356940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.760 [2024-10-08 20:58:17.356979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.760 [2024-10-08 20:58:17.356997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.760 [2024-10-08 20:58:17.357012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.760 [2024-10-08 20:58:17.357028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.760 [2024-10-08 20:58:17.357043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.760 [2024-10-08 20:58:17.357057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.760 
[2024-10-08 20:58:17.357070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.760 [2024-10-08 20:58:17.357083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f3db0 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is 
same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.357991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.760 [2024-10-08 20:58:17.358166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.358179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.358191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5a10 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359882] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.359994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 
00:30:48.761 [2024-10-08 20:58:17.360205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is 
same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.360721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6e70 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.361702] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.761 [2024-10-08 20:58:17.362470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the 
state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.761 [2024-10-08 20:58:17.362827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.362997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363150] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363224] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.762 [2024-10-08 20:58:17.363247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.363371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7340 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364828]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.364988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 
00:30:48.762 [2024-10-08 20:58:17.365121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.762 [2024-10-08 20:58:17.365371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is 
same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.365574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7830 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366868] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.366988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 
00:30:48.763 [2024-10-08 20:58:17.367175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.763 [2024-10-08 20:58:17.367275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.367288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.367299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.367311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.367323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.367335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7d00 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is 
same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.369994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370058] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.370188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8550 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 
00:30:48.764 [2024-10-08 20:58:17.371375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.764 [2024-10-08 20:58:17.371534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is 
same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.371988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.372112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8a40 is same with the state(6) to be set 00:30:48.765 [2024-10-08 20:58:17.382520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.382976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.382990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.765 [2024-10-08 20:58:17.383319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.765 [2024-10-08 20:58:17.383334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.383984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.383999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.766 [2024-10-08 20:58:17.384639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.766 [2024-10-08 20:58:17.384661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.384743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.767 [2024-10-08 20:58:17.384831] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25f9fc0 was disconnected and freed. reset controller. 
00:30:48.767 [2024-10-08 20:58:17.384996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644f50 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.385171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641a30 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.385324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3db0 (9): Bad file descriptor 00:30:48.767 [2024-10-08 20:58:17.385378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e91e0 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.385553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266ce10 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.385734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 
[2024-10-08 20:58:17.385809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2611ba0 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.385913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.385981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.385994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215c1e0 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.386082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2611580 is same with the state(6) to be set 00:30:48.767 [2024-10-08 20:58:17.386244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.767 [2024-10-08 20:58:17.386339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.767 [2024-10-08 20:58:17.386353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ea970 is same with the state(6) to be set 00:30:48.768 [2024-10-08 20:58:17.386411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.768 [2024-10-08 20:58:17.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.768 [2024-10-08 20:58:17.386471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.768 [2024-10-08 20:58:17.386498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.768 [2024-10-08 20:58:17.386525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266da40 is same with the state(6) to be set 00:30:48.768 [2024-10-08 
20:58:17.386937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.386961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.386983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.386998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.387977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.387992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.388008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.388023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.388044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.388060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.388076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.388091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.768 [2024-10-08 20:58:17.388107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.768 [2024-10-08 20:58:17.388122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.388966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.769 [2024-10-08 20:58:17.388981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.769 [2024-10-08 20:58:17.389076] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25f6040 was disconnected and freed. reset controller. 00:30:48.769 [2024-10-08 20:58:17.390524] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.769 [2024-10-08 20:58:17.392031] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:48.769 [2024-10-08 20:58:17.392066] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:48.769 [2024-10-08 20:58:17.392094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215c1e0 (9): Bad file descriptor 00:30:48.769 [2024-10-08 20:58:17.392117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644f50 (9): Bad file descriptor 00:30:48.769 [2024-10-08 20:58:17.393349] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.769 [2024-10-08 20:58:17.393426] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.769 [2024-10-08 20:58:17.393623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.769 [2024-10-08 20:58:17.393662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2644f50 with addr=10.0.0.2, port=4420 00:30:48.769 [2024-10-08 20:58:17.393683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644f50 is same with the state(6) to be set 00:30:48.769 [2024-10-08 20:58:17.393781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.769 [2024-10-08 20:58:17.393807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215c1e0 with addr=10.0.0.2, port=4420 00:30:48.769 [2024-10-08 20:58:17.393824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215c1e0 is same with the state(6) to be set 00:30:48.769 [2024-10-08 20:58:17.393897] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.769 [2024-10-08 20:58:17.393974] 
nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:48.769 [2024-10-08 20:58:17.394047] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:48.769 [2024-10-08 20:58:17.394134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644f50 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.394161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215c1e0 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.394260] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:48.769 [2024-10-08 20:58:17.394282] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:48.769 [2024-10-08 20:58:17.394299] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:48.769 [2024-10-08 20:58:17.394339] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:48.769 [2024-10-08 20:58:17.394355] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:48.769 [2024-10-08 20:58:17.394368] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:48.769 [2024-10-08 20:58:17.394431] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.769 [2024-10-08 20:58:17.394450] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.769 [2024-10-08 20:58:17.394936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641a30 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.394983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e91e0 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.395016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266ce10 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.395047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2611ba0 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.395078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2611580 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.395109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ea970 (9): Bad file descriptor
00:30:48.769 [2024-10-08 20:58:17.395138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266da40 (9): Bad file descriptor
00:30:48.770 [2024-10-08 20:58:17.395301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.770 [2024-10-08 20:58:17.395324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.770 [2024-10-08 20:58:17.395353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.770 [2024-10-08 20:58:17.395369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.770 [2024-10-08 20:58:17.395387]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.395971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.395985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.770 [2024-10-08 20:58:17.396624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.770 [2024-10-08 20:58:17.396639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.396976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.396990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.397273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.397289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.771 [2024-10-08 20:58:17.397303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.771 [2024-10-08 20:58:17.397319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.771 [2024-10-08 20:58:17.397333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.771 [2024-10-08 20:58:17.397348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e0880 is same with the state(6) to be set
00:30:48.771 [2024-10-08 20:58:17.398624] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.771 [2024-10-08 20:58:17.398915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.771 [2024-10-08 20:58:17.398944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f3db0 with addr=10.0.0.2, port=4420
00:30:48.771 [2024-10-08 20:58:17.398962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f3db0 is same with the state(6) to be set
00:30:48.771 [2024-10-08 20:58:17.399301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3db0 (9): Bad file descriptor
00:30:48.771 [2024-10-08 20:58:17.399373] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.771 [2024-10-08 20:58:17.399393] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.771 [2024-10-08 20:58:17.399420] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.771 [2024-10-08 20:58:17.399490] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.771 [2024-10-08 20:58:17.402970] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:48.771 [2024-10-08 20:58:17.402999] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:48.771 [2024-10-08 20:58:17.403215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.771 [2024-10-08 20:58:17.403243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215c1e0 with addr=10.0.0.2, port=4420
00:30:48.771 [2024-10-08 20:58:17.403261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215c1e0 is same with the state(6) to be set
00:30:48.771 [2024-10-08 20:58:17.403408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.771 [2024-10-08 20:58:17.403433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2644f50 with addr=10.0.0.2, port=4420
00:30:48.771 [2024-10-08 20:58:17.403450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644f50 is same with the state(6) to be set
00:30:48.771 [2024-10-08 20:58:17.403508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215c1e0 (9): Bad file descriptor
00:30:48.771 [2024-10-08 20:58:17.403531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644f50 (9): Bad file descriptor
00:30:48.771 [2024-10-08 20:58:17.403584] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:48.771 [2024-10-08 20:58:17.403601] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:48.771 [2024-10-08 20:58:17.403615] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:48.771 [2024-10-08 20:58:17.403635] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:48.771 [2024-10-08 20:58:17.403665] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:48.771 [2024-10-08 20:58:17.403682] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:48.771 [2024-10-08 20:58:17.403739] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.771 [2024-10-08 20:58:17.403757] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.771 [2024-10-08 20:58:17.405133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.771 [2024-10-08 20:58:17.405365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.771 [2024-10-08 20:58:17.405380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 
20:58:17.405471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.405978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.405993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.772 [2024-10-08 20:58:17.406520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.772 [2024-10-08 20:58:17.406537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.406977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.406993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.407172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.407187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e1a70 is same with the state(6) to be set 00:30:48.773 [2024-10-08 20:58:17.408468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408668] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.408972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.408986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.773 [2024-10-08 20:58:17.409160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.773 [2024-10-08 20:58:17.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.774 [2024-10-08 20:58:17.409940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.409970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.409984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 
20:58:17.410245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.774 [2024-10-08 20:58:17.410464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.774 [2024-10-08 20:58:17.410478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.410493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f84e0 is same with the state(6) to be set 00:30:48.775 [2024-10-08 20:58:17.411798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.411984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.411999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.412971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.412988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.413002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.413019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.413033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.775 [2024-10-08 20:58:17.413054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.775 [2024-10-08 20:58:17.413069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.413802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f9910 is same with the state(6) to be set 00:30:48.776 [2024-10-08 20:58:17.415064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.776 [2024-10-08 20:58:17.415465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.776 [2024-10-08 20:58:17.415484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.415979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.415996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.777 [2024-10-08 20:58:17.416567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.777 [2024-10-08 20:58:17.416796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.777 [2024-10-08 20:58:17.416813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.416827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.416858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 
20:58:17.416889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.416920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.416950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.416981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.416997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.417011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.417027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.417042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.417076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.417091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f4b30 is same with the state(6) to be set 00:30:48.778 [2024-10-08 20:58:17.418335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.418972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.418986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.778 [2024-10-08 20:58:17.419346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.778 [2024-10-08 20:58:17.419360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.419376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.419390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.426983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.426997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.427503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f75a0 is same with the state(6) to be set 00:30:48.779 [2024-10-08 20:58:17.428880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.428905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.428930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.428946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.428962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.428976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.428993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.779 [2024-10-08 20:58:17.429196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.779 [2024-10-08 20:58:17.429213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.429977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.429994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.780 [2024-10-08 20:58:17.430239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.780 [2024-10-08 20:58:17.430253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.781 [2024-10-08 20:58:17.430430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 
20:58:17.430745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.430887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.430903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f8b40 is same with the state(6) to be set 00:30:48.781 [2024-10-08 20:58:17.432156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.432180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.432201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.432218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.432235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.432250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.432266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.432280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.781 [2024-10-08 20:58:17.432297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.781 [2024-10-08 20:58:17.432311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 repeated entry pairs omitted: nvme_io_qpair_print_command READ sqid:1 cid:5 through cid:63, nsid:1, lba:17024 through lba:24448 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:48.783 [2024-10-08 20:58:17.434166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fb500 is same with the state(6) to be set
00:30:48.783 [2024-10-08 20:58:17.435374] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:48.783 [2024-10-08 20:58:17.435406] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:48.783 [2024-10-08 20:58:17.435426] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:48.783 [2024-10-08 20:58:17.435443] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:48.783 [2024-10-08 20:58:17.435571] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:48.783 [2024-10-08 20:58:17.435600] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:48.783 [2024-10-08 20:58:17.435622] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:48.783 [2024-10-08 20:58:17.435732] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:48.783 [2024-10-08 20:58:17.435758] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:48.783 task offset: 27520 on job bdev=Nvme9n1 fails
00:30:48.783 
00:30:48.783 Latency(us)
00:30:48.783 [2024-10-08T18:58:17.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.783 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme1n1 ended in about 1.09 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme1n1 : 1.09 117.18 7.32 58.59 0.00 360409.82 21068.61 285834.05
00:30:48.783 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme2n1 ended in about 1.10 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme2n1 : 1.10 177.84 11.12 58.07 0.00 263694.83 32622.36 248551.35
00:30:48.783 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme3n1 ended in about 1.11 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme3n1 : 1.11 173.69 10.86 57.90 0.00 263666.73 17087.91 267192.70
00:30:48.783 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme4n1 ended in about 1.11 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme4n1 : 1.11 173.18 10.82 57.73 0.00 259595.00 25631.86 264085.81
00:30:48.783 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme5n1 ended in about 1.11 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme5n1 : 1.11 172.67 10.79 57.56 0.00 255495.59 33399.09 251658.24
00:30:48.783 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme6n1 ended in about 1.09 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme6n1 : 1.09 176.85 11.05 58.95 0.00 244249.51 9077.95 287387.50
00:30:48.783 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme7n1 ended in about 1.12 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme7n1 : 1.12 171.06 10.69 57.02 0.00 248741.17 18641.35 273406.48
00:30:48.783 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme8n1 ended in about 1.13 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme8n1 : 1.13 170.56 10.66 56.85 0.00 244933.40 21651.15 251658.24
00:30:48.783 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme9n1 ended in about 1.08 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme9n1 : 1.08 177.09 11.07 59.03 0.00 229926.02 7136.14 288940.94
00:30:48.783 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.783 Job: Nvme10n1 ended in about 1.13 seconds with error
00:30:48.783 Verification LBA range: start 0x0 length 0x400
00:30:48.783 Nvme10n1 : 1.13 113.38 7.09 56.69 0.00 315242.07 20971.52 296708.17
00:30:48.783 [2024-10-08T18:58:17.546Z] ===================================================================================================================
00:30:48.783 [2024-10-08T18:58:17.546Z] Total : 1623.50 101.47 578.38 0.00 264949.64 7136.14 296708.17
00:30:48.783 [2024-10-08 20:58:17.471008] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:48.783 [2024-10-08 20:58:17.471101] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:48.783 [2024-10-08 20:58:17.471412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.783 [2024-10-08 20:58:17.471448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ea970 with addr=10.0.0.2, port=4420
00:30:48.783 [2024-10-08 20:58:17.471469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ea970 is same with the state(6) to be set
00:30:48.783 [2024-10-08 20:58:17.471620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.783 [2024-10-08 20:58:17.471648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e91e0 with addr=10.0.0.2, port=4420
00:30:48.783 [2024-10-08 20:58:17.471673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e91e0 is same with the state(6) to be set
00:30:48.783 [2024-10-08 20:58:17.471798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.783 [2024-10-08 20:58:17.471825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266da40 with addr=10.0.0.2, port=4420
00:30:48.783 [2024-10-08 20:58:17.471855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266da40 is same with the state(6) to be set
00:30:48.783 [2024-10-08 20:58:17.472007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.783 [2024-10-08 20:58:17.472035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2611ba0 with addr=10.0.0.2, port=4420
00:30:48.783 [2024-10-08 20:58:17.472051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2611ba0 is same with the state(6) to be set
00:30:48.783 [2024-10-08 20:58:17.473982] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.783 [2024-10-08 20:58:17.474014] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:48.783 [2024-10-08 20:58:17.474226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.783 [2024-10-08 20:58:17.474256] 
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2611580 with addr=10.0.0.2, port=4420 00:30:48.783 [2024-10-08 20:58:17.474273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2611580 is same with the state(6) to be set 00:30:48.783 [2024-10-08 20:58:17.474398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.783 [2024-10-08 20:58:17.474425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266ce10 with addr=10.0.0.2, port=4420 00:30:48.783 [2024-10-08 20:58:17.474442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266ce10 is same with the state(6) to be set 00:30:48.783 [2024-10-08 20:58:17.474587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.783 [2024-10-08 20:58:17.474614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641a30 with addr=10.0.0.2, port=4420 00:30:48.783 [2024-10-08 20:58:17.474630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641a30 is same with the state(6) to be set 00:30:48.783 [2024-10-08 20:58:17.474663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ea970 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.474688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e91e0 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.474708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266da40 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.474729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2611ba0 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.474782] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:48.783 [2024-10-08 20:58:17.474809] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:48.783 [2024-10-08 20:58:17.474829] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:48.783 [2024-10-08 20:58:17.474848] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:48.783 [2024-10-08 20:58:17.474868] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:48.783 [2024-10-08 20:58:17.474964] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:48.783 [2024-10-08 20:58:17.475147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.783 [2024-10-08 20:58:17.475175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f3db0 with addr=10.0.0.2, port=4420 00:30:48.783 [2024-10-08 20:58:17.475192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f3db0 is same with the state(6) to be set 00:30:48.783 [2024-10-08 20:58:17.475294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.783 [2024-10-08 20:58:17.475321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2644f50 with addr=10.0.0.2, port=4420 00:30:48.783 [2024-10-08 20:58:17.475343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644f50 is same with the state(6) to be set 00:30:48.783 [2024-10-08 20:58:17.475362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2611580 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.475382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266ce10 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.475401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641a30 (9): Bad file descriptor 00:30:48.783 [2024-10-08 20:58:17.475418] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:48.783 [2024-10-08 20:58:17.475432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:48.783 [2024-10-08 20:58:17.475448] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:48.783 [2024-10-08 20:58:17.475480] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:48.783 [2024-10-08 20:58:17.475494] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.475508] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:48.784 [2024-10-08 20:58:17.475525] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.475541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.475555] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:48.784 [2024-10-08 20:58:17.475572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.475586] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.475599] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:48.784 [2024-10-08 20:58:17.475711] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.784 [2024-10-08 20:58:17.475734] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.475747] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.475759] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.475949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.784 [2024-10-08 20:58:17.475974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215c1e0 with addr=10.0.0.2, port=4420 00:30:48.784 [2024-10-08 20:58:17.475991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215c1e0 is same with the state(6) to be set 00:30:48.784 [2024-10-08 20:58:17.476010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3db0 (9): Bad file descriptor 00:30:48.784 [2024-10-08 20:58:17.476029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644f50 (9): Bad file descriptor 00:30:48.784 [2024-10-08 20:58:17.476046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476059] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:48.784 [2024-10-08 20:58:17.476100] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476114] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476132] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:48.784 [2024-10-08 20:58:17.476158] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476172] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476186] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:48.784 [2024-10-08 20:58:17.476224] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.476242] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.476254] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.476272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215c1e0 (9): Bad file descriptor 00:30:48.784 [2024-10-08 20:58:17.476290] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476303] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476317] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.784 [2024-10-08 20:58:17.476338] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476353] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476367] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:48.784 [2024-10-08 20:58:17.476405] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.476422] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.784 [2024-10-08 20:58:17.476436] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:48.784 [2024-10-08 20:58:17.476449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:48.784 [2024-10-08 20:58:17.476463] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:48.784 [2024-10-08 20:58:17.476501] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.351 20:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1793903 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1793903 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1793903 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.733 rmmod nvme_tcp 00:30:50.733 rmmod nvme_fabrics 00:30:50.733 rmmod nvme_keyring 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1793587 ']' 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1793587 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1793587 ']' 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1793587 00:30:50.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1793587) - No such process 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1793587 is not found' 00:30:50.733 Process with pid 1793587 is not found 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:50.733 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.734 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.644 00:30:52.644 real 0m9.156s 00:30:52.644 user 0m24.576s 00:30:52.644 sys 0m1.860s 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.644 ************************************ 00:30:52.644 END TEST nvmf_shutdown_tc3 00:30:52.644 ************************************ 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:52.644 ************************************ 00:30:52.644 START TEST nvmf_shutdown_tc4 00:30:52.644 ************************************ 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.644 20:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:52.644 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:52.644 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.644 20:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:52.644 Found net devices under 0000:84:00.0: cvl_0_0 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.644 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:52.645 Found net devices under 0000:84:00.1: cvl_0_1 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.645 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:52.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:52.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:30:52.906 00:30:52.906 --- 10.0.0.2 ping statistics --- 00:30:52.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.906 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:52.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:52.906 00:30:52.906 --- 10.0.0.1 ping statistics --- 00:30:52.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.906 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1794798 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1794798 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1794798 ']' 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.906 20:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:53.167 [2024-10-08 20:58:21.671456] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:53.167 [2024-10-08 20:58:21.671632] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.167 [2024-10-08 20:58:21.829132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:53.426 [2024-10-08 20:58:22.035243] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.426 [2024-10-08 20:58:22.035351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.426 [2024-10-08 20:58:22.035388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.426 [2024-10-08 20:58:22.035424] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.426 [2024-10-08 20:58:22.035453] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.426 [2024-10-08 20:58:22.039152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.426 [2024-10-08 20:58:22.039253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:53.426 [2024-10-08 20:58:22.039326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:53.426 [2024-10-08 20:58:22.039329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.426 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.426 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:30:53.426 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:53.426 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:53.426 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:53.684 [2024-10-08 20:58:22.212402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:53.684 20:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.684 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:53.684 Malloc1 00:30:53.684 [2024-10-08 20:58:22.302753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.684 Malloc2 00:30:53.684 Malloc3 00:30:53.684 Malloc4 00:30:53.942 Malloc5 00:30:53.942 Malloc6 00:30:53.942 Malloc7 00:30:53.942 Malloc8 00:30:53.942 Malloc9 00:30:54.199 Malloc10 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1794984 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:54.199 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:54.199 [2024-10-08 20:58:22.829430] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
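[editor's note] Before the shutdown itself is exercised, shutdown_tc4 builds its target configuration in batch: the loop over num_subsystems={1..10} cats one RPC stanza per subsystem into rpcs.txt, and the single rpc_cmd call then creates the ten Malloc-backed subsystems (Malloc1..Malloc10 above) plus the TCP listener on 10.0.0.2:4420, after which spdk_nvme_perf is pointed at that listener. A minimal sketch of what one pass of that configuration roughly amounts to, issued directly with SPDK's rpc.py; the NQN pattern, bdev sizes and the allow-any-host flag are illustrative assumptions, not the exact values shutdown.sh writes into rpcs.txt:

    # Illustrative sketch only; shutdown.sh generates the equivalent RPCs into rpcs.txt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192            # same transport options as the traced rpc_cmd
    for i in $(seq 1 10); do
        $RPC bdev_malloc_create -b Malloc$i 64 512           # 64 MiB / 512-byte blocks (sizes assumed)
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a   # -a allow any host (assumption)
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

With that in place, the perf command traced above (-q 128, -o 45056, -w randwrite, -t 20, TCP transport ID for 10.0.0.2:4420) is the I/O load that the subsequent shutdown is performed under.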
00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1794798 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1794798 ']' 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1794798 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1794798 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1794798' 00:30:59.561 killing process with pid 1794798 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1794798 00:30:59.561 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1794798 00:30:59.561 [2024-10-08 20:58:27.838937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57990 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57990 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57990 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.839954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e60 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with 
error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 [2024-10-08 20:58:27.840606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 [2024-10-08 20:58:27.840642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.561 he state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.561 he state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 [2024-10-08 20:58:27.840717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 [2024-10-08 20:58:27.840743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 [2024-10-08 20:58:27.840755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.561 he state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.561 he state(6) to be set 00:30:59.561 starting I/O failed: -6 00:30:59.561 [2024-10-08 20:58:27.840803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 [2024-10-08 
20:58:27.840816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 [2024-10-08 20:58:27.840828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58330 is same with the state(6) to be set 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 starting I/O failed: -6 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.561 [2024-10-08 20:58:27.840991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:59.561 NVMe io qpair process completion error 00:30:59.561 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 [2024-10-08 20:58:27.841316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd574c0 is same with the state(6) to be set 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 [2024-10-08 20:58:27.841346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd574c0 is same with the state(6) to be set 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 
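[editor's note] The shutdown check proper starts at killprocess: a plain kill (SIGTERM) is sent to the nvmf_tgt pid and waited on while spdk_nvme_perf is still driving 128-deep random writes. Everything that follows is the expected fallout of that: the target aborts in-flight commands as it tears down its TCP qpairs, so perf logs each outstanding write as "completed with error" and eventually gives up on the whole qpair with a CQ transport error. A hedged sketch of the same sequence outside the harness; the nvmfpid/perfpid variables are assumptions standing in for the pids the script tracks (1794798 and 1794984 in this run):

    # Assumed standalone reproduction of the shutdown step; pids and ramp-up time are illustrative
    perf_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $perf_bin -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                       # let I/O ramp up, mirroring shutdown.sh@150
    kill "$nvmfpid"               # SIGTERM to nvmf_tgt, exactly what killprocess does here
    wait "$nvmfpid" || true       # target exits; perf keeps running into transport errors
    wait "$perfpid" || true       # perf returns non-zero once its qpairs are gone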
[2024-10-08 20:58:27.842127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 [2024-10-08 20:58:27.843309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 2 00:30:59.562 starting I/O failed: -6 00:30:59.562 starting I/O failed: -6 00:30:59.562 starting I/O failed: -6 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting 
I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 [2024-10-08 20:58:27.844764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.562 starting I/O failed: -6 00:30:59.562 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error 
(sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 [2024-10-08 20:58:27.846712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.563 NVMe io qpair process completion error 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 [2024-10-08 20:58:27.848769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81230 is same with the state(6) to be set 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 [2024-10-08 20:58:27.848805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81230 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.563 he state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.848823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81230 is same with the state(6) to be set 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 
00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 [2024-10-08 20:58:27.849324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:59.563 starting I/O failed: -6 00:30:59.563 starting I/O failed: -6 00:30:59.563 [2024-10-08 20:58:27.849940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.849974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.849990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 [2024-10-08 20:58:27.850139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe803c0 is same with the state(6) to be set 00:30:59.563 
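[editor's note] The failure signature repeated through this block decodes cleanly. On the initiator side, -6 is the errno ENXIO ("No such device or address"), which the TCP transport reports once the target's socket has gone away; the per-command status (sct=0, sc=8) corresponds in NVMe terms to status code type 0 (Generic Command Status) with status code 08h, Command Aborted due to SQ Deletion, i.e. the writes were aborted because their submission queues were being deleted during shutdown, not because of a media or protocol fault. The tcp.c:1773 "recv state ... is same with the state(6) to be set" notices are the target-side half of the same teardown. A quick way to quantify the fallout from a saved copy of this console output (the log file name is an assumption):

    # shutdown_tc4.log is an assumed capture of this console output
    grep -c 'Write completed with error (sct=0, sc=8)' shutdown_tc4.log   # writes aborted on SQ deletion
    grep -c 'CQ transport error -6' shutdown_tc4.log                      # qpairs dropped with ENXIO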
starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 starting I/O failed: -6 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.563 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.851387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.564 NVMe io qpair process completion error 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 [2024-10-08 20:58:27.855452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with the state(6) to be set 00:30:59.564 Write completed with error 
(sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.855492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with the state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.855510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.564 he state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.855525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.855537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with tstarting I/O failed: -6 00:30:59.564 he state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.855551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.855562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe820a0 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.855998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.564 he state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.856032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.856049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 starting I/O failed: -6 00:30:59.564 [2024-10-08 20:58:27.856064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.856076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with tWrite completed with error (sct=0, sc=8) 
00:30:59.564 he state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.856090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.856102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.856114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with tWrite completed with error (sct=0, sc=8) 00:30:59.564 he state(6) to be set 00:30:59.564 [2024-10-08 20:58:27.856127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.856140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe81700 is same with the state(6) to be set 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.856287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 
Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 [2024-10-08 20:58:27.857375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.564 starting I/O failed: -6 00:30:59.564 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write 
completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 [2024-10-08 20:58:27.858858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 00:30:59.565 Write completed with error (sct=0, sc=8) 00:30:59.565 starting I/O failed: -6 
00:30:59.565 Write completed with error (sct=0, sc=8)
00:30:59.565 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued write I/Os ...]
00:30:59.565 [2024-10-08 20:58:27.860951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:59.565 NVMe io qpair process completion error
[... write completion errors repeated ...]
00:30:59.565 [2024-10-08 20:58:27.862311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... write completion errors repeated ...]
00:30:59.566 [2024-10-08 20:58:27.863550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... write completion errors repeated ...]
00:30:59.566 [2024-10-08 20:58:27.864922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... write completion errors repeated ...]
00:30:59.567 [2024-10-08 20:58:27.867706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.567 NVMe io qpair process completion error
[... write completion errors repeated ...]
00:30:59.567 [2024-10-08 20:58:27.869198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... write completion errors repeated ...]
00:30:59.567 [2024-10-08 20:58:27.870387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... write completion errors repeated ...]
00:30:59.568 [2024-10-08 20:58:27.871820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... write completion errors repeated ...]
00:30:59.568 [2024-10-08 20:58:27.875951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.568 NVMe io qpair process completion error
[... write completion errors repeated ...]
00:30:59.569 [2024-10-08 20:58:27.877385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... write completion errors repeated ...]
00:30:59.569 [2024-10-08 20:58:27.878645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... write completion errors repeated ...]
00:30:59.569 [2024-10-08 20:58:27.880052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... write completion errors repeated ...]
00:30:59.570 [2024-10-08 20:58:27.883755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.570 NVMe io qpair process completion error
[... write completion errors repeated ...]
00:30:59.570 [2024-10-08 20:58:27.885168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... write completion errors repeated ...]
00:30:59.570 [2024-10-08 20:58:27.886250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... write completion errors repeated ...]
00:30:59.571 [2024-10-08 20:58:27.887647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... write completion errors repeated ...]
00:30:59.571 [2024-10-08 20:58:27.889933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.571 NVMe io qpair process completion error
[... write completion errors repeated ...]
00:30:59.571 [2024-10-08 20:58:27.891451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:59.571 Write completed with error (sct=0, sc=8)
00:30:59.572 Write completed with error (sct=0, sc=8)
starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 [2024-10-08 20:58:27.892670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with 
error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 [2024-10-08 20:58:27.894092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 
Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.572 Write completed with error (sct=0, sc=8) 00:30:59.572 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 starting I/O failed: -6 00:30:59.573 [2024-10-08 20:58:27.897066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.573 NVMe io qpair process completion error 00:30:59.573 Write completed with error (sct=0, sc=8) 00:30:59.573 
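
The -6 in these bursts is -ENXIO ("No such device or address"): once the TCP connection to the target drops, spdk_nvme_qpair_process_completions() reports a CQ transport error on the qpair and every write still queued there completes with an error, which is what spdk_nvme_perf is printing here. As a rough, hypothetical sketch (not the actual target/shutdown.sh test case), the kind of sequence that produces this output looks roughly like the following; the flag values, subsystem NQN and timings are illustrative only.

    # Hypothetical reproduction sketch -- flag values, NQN and timings are illustrative,
    # not copied from target/shutdown.sh.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid="trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

    "$rootdir/build/bin/nvmf_tgt" &          # start the TCP target (subsystem/listener setup via rpc.py omitted)
    nvmfpid=$!
    sleep 1

    "$rootdir/build/bin/spdk_nvme_perf" -q 128 -o 4096 -w write -t 10 -r "$trid" &
    perfpid=$!

    sleep 2                                  # let writes get in flight
    kill -9 "$nvmfpid"                       # yank the target away mid-I/O
    wait "$perfpid"                          # perf logs the CQ transport errors and exits non-zero

Each connected subsystem tears down the same way, which is why the same error pattern repeats once per controller before the final summary.
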
00:30:59.573 Write completed with error (sct=0, sc=8)
(this entry, interleaved with further "starting I/O failed: -6" lines, repeats for the remaining outstanding writes on the next controller's qpairs)
00:30:59.575 [2024-10-08 20:58:27.909308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.575 [2024-10-08 20:58:27.913046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:59.575 NVMe io qpair process completion error
00:30:59.575 Initializing NVMe Controllers
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:59.575 Controller IO queue size 128, less than required.
00:30:59.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
(the same two-line queue-size warning follows each of the attach lines below)
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:59.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:59.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:59.575 Initialization complete. Launching workers.
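
The per-controller "IO queue size 128, less than required" warning means the benchmark's requested queue depth needs more queue entries than the controller's advertised I/O queue size (128) provides, so the excess submissions wait in the initiator driver's software queue, exactly as the message says. Two hedged ways to act on the log's own suggestion are sketched below; the perf flags are standard, while the RPC option name on the target side is an assumption to double-check against this tree's scripts/rpc.py.

    # 1) Initiator side: run perf with a shallower queue (and/or smaller IO size),
    #    as the warning suggests. Flag values are illustrative.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w write -t 10 \
            -r "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

    # 2) Target side: create the TCP transport with a deeper maximum queue before
    #    adding subsystems (assumed option name; verify with
    #    scripts/rpc.py nvmf_create_transport --help).
    "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp --max-queue-depth 256

Here the warning is informational: perf still completes the attach and simply parks the overflow in software until queue slots free up.
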
00:30:59.575 ========================================================
00:30:59.575 Latency(us)
00:30:59.576 Device Information : IOPS MiB/s Average min max
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1703.74 73.21 75141.88 948.18 140426.63
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1703.95 73.22 75157.54 1262.74 138917.97
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1702.47 73.15 75261.16 936.90 138296.22
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1709.24 73.44 75021.60 942.99 134105.03
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1701.63 73.12 75405.98 889.07 132444.36
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1704.80 73.25 75297.06 881.93 132702.77
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1629.98 70.04 78687.06 1108.99 131332.06
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1664.85 71.54 76155.54 1006.72 130649.86
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1666.54 71.61 76743.64 884.84 153592.73
00:30:59.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1680.07 72.19 76356.68 1009.65 130918.00
00:30:59.576 ========================================================
00:30:59.576 Total : 16867.27 724.77 75907.54 881.93 153592.73
00:30:59.576
00:30:59.576 [2024-10-08 20:58:27.918344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd37bb0 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c040 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35ab0 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35de0 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35780 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c370 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd379d0 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd377f0 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3bd10 is same with the state(6) to be set
00:30:59.576 [2024-10-08 20:58:27.918994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c6a0 is same with the state(6) to be set
00:30:59.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:59.835 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:31:00.770 20:58:29
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1794984 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1794984 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1794984 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:00.770 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.029 rmmod nvme_tcp 00:31:01.029 rmmod nvme_fabrics 00:31:01.029 rmmod nvme_keyring 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1794798 ']' 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1794798 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1794798 ']' 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1794798 00:31:01.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1794798) - No such process 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1794798 is not found' 00:31:01.029 Process with pid 1794798 is not found 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.029 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.934 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:02.934 00:31:02.934 real 0m10.360s 00:31:02.934 user 0m25.119s 00:31:02.934 sys 0m6.306s 00:31:02.934 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.934 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:02.934 ************************************ 00:31:02.934 END TEST nvmf_shutdown_tc4 00:31:02.934 ************************************ 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:31:03.192 00:31:03.192 real 0m42.071s 00:31:03.192 user 1m52.541s 00:31:03.192 sys 0m14.358s 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.192 ************************************ 00:31:03.192 END TEST nvmf_shutdown 00:31:03.192 ************************************ 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:03.192 00:31:03.192 real 15m49.325s 00:31:03.192 user 36m59.464s 00:31:03.192 sys 3m28.133s 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.192 20:58:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:03.192 ************************************ 00:31:03.192 END TEST nvmf_target_extra 00:31:03.192 ************************************ 00:31:03.192 20:58:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:03.192 20:58:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.192 20:58:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.192 20:58:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.192 ************************************ 00:31:03.192 START TEST nvmf_host 00:31:03.192 ************************************ 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:03.192 * Looking for test storage... 00:31:03.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:03.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.192 --rc genhtml_branch_coverage=1 00:31:03.192 --rc genhtml_function_coverage=1 00:31:03.192 --rc genhtml_legend=1 00:31:03.192 --rc geninfo_all_blocks=1 00:31:03.192 --rc geninfo_unexecuted_blocks=1 00:31:03.192 00:31:03.192 ' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:03.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.192 --rc genhtml_branch_coverage=1 00:31:03.192 --rc genhtml_function_coverage=1 00:31:03.192 --rc genhtml_legend=1 00:31:03.192 --rc geninfo_all_blocks=1 00:31:03.192 --rc geninfo_unexecuted_blocks=1 00:31:03.192 00:31:03.192 ' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:03.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.192 --rc genhtml_branch_coverage=1 00:31:03.192 --rc genhtml_function_coverage=1 00:31:03.192 --rc genhtml_legend=1 00:31:03.192 --rc geninfo_all_blocks=1 00:31:03.192 --rc geninfo_unexecuted_blocks=1 00:31:03.192 00:31:03.192 ' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:03.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.192 --rc genhtml_branch_coverage=1 00:31:03.192 --rc genhtml_function_coverage=1 00:31:03.192 --rc genhtml_legend=1 00:31:03.192 --rc geninfo_all_blocks=1 00:31:03.192 --rc geninfo_unexecuted_blocks=1 00:31:03.192 00:31:03.192 ' 00:31:03.192 20:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:03.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.453 20:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.453 ************************************ 00:31:03.453 START TEST nvmf_multicontroller 00:31:03.453 ************************************ 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:03.453 * Looking for test storage... 
00:31:03.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.453 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:03.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.454 --rc genhtml_branch_coverage=1 00:31:03.454 --rc genhtml_function_coverage=1 00:31:03.454 --rc genhtml_legend=1 00:31:03.454 --rc geninfo_all_blocks=1 00:31:03.454 --rc geninfo_unexecuted_blocks=1 00:31:03.454 00:31:03.454 ' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:03.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.454 --rc genhtml_branch_coverage=1 00:31:03.454 --rc genhtml_function_coverage=1 00:31:03.454 --rc genhtml_legend=1 00:31:03.454 --rc geninfo_all_blocks=1 00:31:03.454 --rc geninfo_unexecuted_blocks=1 00:31:03.454 00:31:03.454 ' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:03.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.454 --rc genhtml_branch_coverage=1 00:31:03.454 --rc genhtml_function_coverage=1 00:31:03.454 --rc genhtml_legend=1 00:31:03.454 --rc geninfo_all_blocks=1 00:31:03.454 --rc geninfo_unexecuted_blocks=1 00:31:03.454 00:31:03.454 ' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:03.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.454 --rc genhtml_branch_coverage=1 00:31:03.454 --rc genhtml_function_coverage=1 00:31:03.454 --rc genhtml_legend=1 00:31:03.454 --rc geninfo_all_blocks=1 00:31:03.454 --rc geninfo_unexecuted_blocks=1 00:31:03.454 00:31:03.454 ' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:03.454 20:58:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:03.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:03.454 20:58:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:31:03.454 20:58:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.743 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.743 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.743 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.743 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.744 
20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:06.744 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:06.744 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.744 20:58:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:06.744 Found net devices under 0000:84:00.0: cvl_0_0 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:06.744 Found net devices under 0000:84:00.1: cvl_0_1 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.744 20:58:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:31:06.744 00:31:06.744 --- 10.0.0.2 ping statistics --- 00:31:06.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.744 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:06.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:31:06.744 00:31:06.744 --- 10.0.0.1 ping statistics --- 00:31:06.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.744 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:06.744 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1797915 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1797915 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1797915 ']' 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:06.745 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 [2024-10-08 20:58:35.276735] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:31:06.745 [2024-10-08 20:58:35.276911] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.745 [2024-10-08 20:58:35.440207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:07.003 [2024-10-08 20:58:35.661633] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.003 [2024-10-08 20:58:35.661775] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.003 [2024-10-08 20:58:35.661812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.003 [2024-10-08 20:58:35.661843] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.003 [2024-10-08 20:58:35.661869] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.003 [2024-10-08 20:58:35.664034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.003 [2024-10-08 20:58:35.664140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.003 [2024-10-08 20:58:35.664144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 [2024-10-08 20:58:35.826501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 Malloc0 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 [2024-10-08 20:58:35.893958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 [2024-10-08 20:58:35.901804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 Malloc1 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1797985 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1797985 /var/tmp/bdevperf.sock 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1797985 ']' 00:31:07.261 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:07.262 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.262 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:07.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
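Note: the attach-controller checks that follow are all issued through the bdevperf JSON-RPC socket announced above. A minimal manual sketch of the same calls, assuming the stock SPDK scripts/rpc.py client (the test's rpc_cmd helper is assumed here to wrap it) and the socket path shown in this run:

    # Sketch only; controller name, addresses, NQNs and socket path are taken
    # from the trace above and not re-verified outside this run.
    # First attach creates controller NVMe0 and exposes bdev NVMe0n1.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Re-attaching under the same controller name with a different host NQN,
    # a different subsystem NQN, with multipath disabled (-x disable), or with
    # -x failover against the same path is expected to fail with JSON-RPC
    # error -114; the NOT wrappers in the trace below assert exactly that.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1    # rejected, code -114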
00:31:07.262 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.262 20:58:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.827 NVMe0n1 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.827 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.085 1 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.085 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.085 request: 00:31:08.085 { 00:31:08.085 "name": "NVMe0", 00:31:08.086 "trtype": "tcp", 00:31:08.086 "traddr": "10.0.0.2", 00:31:08.086 "adrfam": "ipv4", 00:31:08.086 "trsvcid": "4420", 00:31:08.086 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:08.086 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:08.086 "hostaddr": "10.0.0.1", 00:31:08.086 "prchk_reftag": false, 00:31:08.086 "prchk_guard": false, 00:31:08.086 "hdgst": false, 00:31:08.086 "ddgst": false, 00:31:08.086 "allow_unrecognized_csi": false, 00:31:08.086 "method": "bdev_nvme_attach_controller", 00:31:08.086 "req_id": 1 00:31:08.086 } 00:31:08.086 Got JSON-RPC error response 00:31:08.086 response: 00:31:08.086 { 00:31:08.086 "code": -114, 00:31:08.086 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:08.086 } 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.086 request: 00:31:08.086 { 00:31:08.086 "name": "NVMe0", 00:31:08.086 "trtype": "tcp", 00:31:08.086 "traddr": "10.0.0.2", 00:31:08.086 "adrfam": "ipv4", 00:31:08.086 "trsvcid": "4420", 00:31:08.086 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:08.086 "hostaddr": "10.0.0.1", 00:31:08.086 "prchk_reftag": false, 00:31:08.086 "prchk_guard": false, 00:31:08.086 "hdgst": false, 00:31:08.086 "ddgst": false, 00:31:08.086 "allow_unrecognized_csi": false, 00:31:08.086 "method": "bdev_nvme_attach_controller", 00:31:08.086 "req_id": 1 00:31:08.086 } 00:31:08.086 Got JSON-RPC error response 00:31:08.086 response: 00:31:08.086 { 00:31:08.086 "code": -114, 00:31:08.086 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:08.086 } 00:31:08.086 20:58:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.086 request: 00:31:08.086 { 00:31:08.086 "name": "NVMe0", 00:31:08.086 "trtype": "tcp", 00:31:08.086 "traddr": "10.0.0.2", 00:31:08.086 "adrfam": "ipv4", 00:31:08.086 "trsvcid": "4420", 00:31:08.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.086 "hostaddr": "10.0.0.1", 00:31:08.086 "prchk_reftag": false, 00:31:08.086 "prchk_guard": false, 00:31:08.086 "hdgst": false, 00:31:08.086 "ddgst": false, 00:31:08.086 "multipath": "disable", 00:31:08.086 "allow_unrecognized_csi": false, 00:31:08.086 "method": "bdev_nvme_attach_controller", 00:31:08.086 "req_id": 1 00:31:08.086 } 00:31:08.086 Got JSON-RPC error response 00:31:08.086 response: 00:31:08.086 { 00:31:08.086 "code": -114, 00:31:08.086 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:08.086 } 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.086 20:58:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.086 request: 00:31:08.086 { 00:31:08.086 "name": "NVMe0", 00:31:08.086 "trtype": "tcp", 00:31:08.086 "traddr": "10.0.0.2", 00:31:08.086 "adrfam": "ipv4", 00:31:08.086 "trsvcid": "4420", 00:31:08.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.086 "hostaddr": "10.0.0.1", 00:31:08.086 "prchk_reftag": false, 00:31:08.086 "prchk_guard": false, 00:31:08.086 "hdgst": false, 00:31:08.086 "ddgst": false, 00:31:08.086 "multipath": "failover", 00:31:08.086 "allow_unrecognized_csi": false, 00:31:08.086 "method": "bdev_nvme_attach_controller", 00:31:08.086 "req_id": 1 00:31:08.086 } 00:31:08.086 Got JSON-RPC error response 00:31:08.086 response: 00:31:08.086 { 00:31:08.086 "code": -114, 00:31:08.086 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:08.086 } 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.086 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.344 NVMe0n1 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
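Note: the trace above walks SPDK's bdev_nvme_attach_controller RPC through the bdevperf JSON-RPC socket. The first attach of NVMe0 succeeds and exposes bdev NVMe0n1; re-attaching under the same controller name on the same network path with a different hostnqn, a different subsystem NQN, or an explicit multipath mode (-x disable / -x failover) is rejected with JSON-RPC error -114; attaching the same subsystem through the second listener port 4421 is then accepted as an additional path. Below is a minimal sketch of the equivalent manual rpc.py calls (the test issues them through its rpc_cmd wrapper). Socket path, addresses and NQNs are copied from the trace; the target-side setup (nvmf_tgt with the subsystem and listeners, plus a bdevperf process waiting on /var/tmp/bdevperf.sock) is assumed to already be in place, so this is an illustration rather than a standalone script.

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # First attach: creates bdev NVMe0n1 over 10.0.0.2:4420 (multicontroller.sh@50).
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Re-attach attempts on the same path return JSON-RPC error -114
    # ("A controller named NVMe0 already exists ..."); the test asserts this
    # with its NOT wrapper, shown here with '|| true' so the sketch continues.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
        -q nqn.2021-09-7.io.spdk:00001 || true
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || true

    # Attaching the same subsystem via the second listener (port 4421) is
    # accepted, giving NVMe0 an additional path (multicontroller.sh@79).
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1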
00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.344 20:58:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.344 00:31:08.344 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.344 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:08.344 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:08.344 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.344 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:08.601 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.601 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:08.601 20:58:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:09.533 { 00:31:09.533 "results": [ 00:31:09.533 { 00:31:09.533 "job": "NVMe0n1", 00:31:09.533 "core_mask": "0x1", 00:31:09.533 "workload": "write", 00:31:09.533 "status": "finished", 00:31:09.533 "queue_depth": 128, 00:31:09.533 "io_size": 4096, 00:31:09.533 "runtime": 1.006303, 00:31:09.533 "iops": 18501.385765519928, 00:31:09.533 "mibps": 72.27103814656222, 00:31:09.533 "io_failed": 0, 00:31:09.533 "io_timeout": 0, 00:31:09.533 "avg_latency_us": 6908.235641016459, 00:31:09.533 "min_latency_us": 2742.8029629629627, 00:31:09.533 "max_latency_us": 12330.477037037037 00:31:09.533 } 00:31:09.533 ], 00:31:09.533 "core_count": 1 00:31:09.533 } 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1797985 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1797985 ']' 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1797985 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:09.533 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1797985 00:31:09.790 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:09.790 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:09.790 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1797985' 00:31:09.790 killing process with pid 1797985 00:31:09.790 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1797985 00:31:09.790 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1797985 00:31:10.048 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.048 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:10.049 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:10.049 [2024-10-08 20:58:36.011681] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:31:10.049 [2024-10-08 20:58:36.011781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797985 ] 00:31:10.049 [2024-10-08 20:58:36.078782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.049 [2024-10-08 20:58:36.194545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.049 [2024-10-08 20:58:37.096153] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 90368063-ed1c-430e-8ae3-7ce4c72f5e34 already exists 00:31:10.049 [2024-10-08 20:58:37.096194] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:90368063-ed1c-430e-8ae3-7ce4c72f5e34 alias for bdev NVMe1n1 00:31:10.049 [2024-10-08 20:58:37.096225] bdev_nvme.c:4560:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:10.049 Running I/O for 1 seconds... 00:31:10.049 18490.00 IOPS, 72.23 MiB/s 00:31:10.049 Latency(us) 00:31:10.049 [2024-10-08T18:58:38.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.049 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:10.049 NVMe0n1 : 1.01 18501.39 72.27 0.00 0.00 6908.24 2742.80 12330.48 00:31:10.049 [2024-10-08T18:58:38.812Z] =================================================================================================================== 00:31:10.049 [2024-10-08T18:58:38.812Z] Total : 18501.39 72.27 0.00 0.00 6908.24 2742.80 12330.48 00:31:10.049 Received shutdown signal, test time was about 1.000000 seconds 00:31:10.049 00:31:10.049 Latency(us) 00:31:10.049 [2024-10-08T18:58:38.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.049 [2024-10-08T18:58:38.812Z] =================================================================================================================== 00:31:10.049 [2024-10-08T18:58:38.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:10.049 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.049 rmmod nvme_tcp 00:31:10.049 rmmod nvme_fabrics 00:31:10.049 rmmod nvme_keyring 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:10.049 
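Note: the per-job JSON block and the Latency(us) table dumped from try.txt above are two views of the same bdevperf run: about 18.5k 4 KiB writes per second (18501.39 IOPS * 4096 B ≈ 72.27 MiB/s) at an average latency of roughly 6.9 ms, which is consistent with a queue depth of 128 (128 / 18501 IOPS ≈ 6.9 ms). The workload is triggered over JSON-RPC by the helper script seen in the trace; a rough sketch of that flow follows. The helper invocation is copied from the trace, while the bdevperf start-up command and its flags are an assumption for illustration only (the actual invocation happens earlier in this log).

    # Assumed bdevperf start: -z makes it wait for an RPC trigger, -r sets the
    # RPC socket, -q/-o/-w/-t select queue depth, IO size, workload and runtime.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &

    # ... bdev_nvme_attach_controller calls as traced above ...

    # Copied from the trace (multicontroller.sh@95): trigger the run and print
    # the JSON results block that appears in try.txt.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests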
20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1797915 ']' 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1797915 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1797915 ']' 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1797915 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1797915 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1797915' 00:31:10.049 killing process with pid 1797915 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1797915 00:31:10.049 20:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1797915 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.616 20:58:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.523 20:58:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.523 00:31:12.523 real 0m9.278s 00:31:12.523 user 0m14.639s 00:31:12.523 sys 0m3.298s 00:31:12.523 20:58:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.780 20:58:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:12.780 ************************************ 00:31:12.780 END TEST nvmf_multicontroller 00:31:12.780 ************************************ 00:31:12.780 20:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:31:12.780 20:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:12.780 20:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.780 20:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.780 ************************************ 00:31:12.780 START TEST nvmf_aer 00:31:12.780 ************************************ 00:31:12.781 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:12.781 * Looking for test storage... 00:31:12.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.781 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:12.781 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:31:12.781 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.039 --rc genhtml_branch_coverage=1 00:31:13.039 --rc genhtml_function_coverage=1 00:31:13.039 --rc genhtml_legend=1 00:31:13.039 --rc geninfo_all_blocks=1 00:31:13.039 --rc geninfo_unexecuted_blocks=1 00:31:13.039 00:31:13.039 ' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.039 --rc genhtml_branch_coverage=1 00:31:13.039 --rc genhtml_function_coverage=1 00:31:13.039 --rc genhtml_legend=1 00:31:13.039 --rc geninfo_all_blocks=1 00:31:13.039 --rc geninfo_unexecuted_blocks=1 00:31:13.039 00:31:13.039 ' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.039 --rc genhtml_branch_coverage=1 00:31:13.039 --rc genhtml_function_coverage=1 00:31:13.039 --rc genhtml_legend=1 00:31:13.039 --rc geninfo_all_blocks=1 00:31:13.039 --rc geninfo_unexecuted_blocks=1 00:31:13.039 00:31:13.039 ' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.039 --rc genhtml_branch_coverage=1 00:31:13.039 --rc genhtml_function_coverage=1 00:31:13.039 --rc genhtml_legend=1 00:31:13.039 --rc geninfo_all_blocks=1 00:31:13.039 --rc geninfo_unexecuted_blocks=1 00:31:13.039 00:31:13.039 ' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.039 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:13.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.040 20:58:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:15.584 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:15.584 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.584 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:15.585 Found net devices under 0000:84:00.0: cvl_0_0 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:15.585 20:58:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:15.585 Found net devices under 0000:84:00.1: cvl_0_1 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.585 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.844 
20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:15.844 00:31:15.844 --- 10.0.0.2 ping statistics --- 00:31:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.844 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:31:15.844 00:31:15.844 --- 10.0.0.1 ping statistics --- 00:31:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.844 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:15.844 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1800424 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1800424 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1800424 ']' 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:15.845 20:58:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.845 [2024-10-08 20:58:44.584253] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:31:15.845 [2024-10-08 20:58:44.584431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.104 [2024-10-08 20:58:44.744990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.362 [2024-10-08 20:58:44.940641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.362 [2024-10-08 20:58:44.940712] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.362 [2024-10-08 20:58:44.940730] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.362 [2024-10-08 20:58:44.940746] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.362 [2024-10-08 20:58:44.940758] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.362 [2024-10-08 20:58:44.942687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.362 [2024-10-08 20:58:44.942718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.362 [2024-10-08 20:58:44.944672] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.362 [2024-10-08 20:58:44.944677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.362 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.363 [2024-10-08 20:58:45.125113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.621 Malloc0 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.621 [2024-10-08 20:58:45.177077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.621 [ 00:31:16.621 { 00:31:16.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:16.621 "subtype": "Discovery", 00:31:16.621 "listen_addresses": [], 00:31:16.621 "allow_any_host": true, 00:31:16.621 "hosts": [] 00:31:16.621 }, 00:31:16.621 { 00:31:16.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.621 "subtype": "NVMe", 00:31:16.621 "listen_addresses": [ 00:31:16.621 { 00:31:16.621 "trtype": "TCP", 00:31:16.621 "adrfam": "IPv4", 00:31:16.621 "traddr": "10.0.0.2", 00:31:16.621 "trsvcid": "4420" 00:31:16.621 } 00:31:16.621 ], 00:31:16.621 "allow_any_host": true, 00:31:16.621 "hosts": [], 00:31:16.621 "serial_number": "SPDK00000000000001", 00:31:16.621 "model_number": "SPDK bdev Controller", 00:31:16.621 "max_namespaces": 2, 00:31:16.621 "min_cntlid": 1, 00:31:16.621 "max_cntlid": 65519, 00:31:16.621 "namespaces": [ 00:31:16.621 { 00:31:16.621 "nsid": 1, 00:31:16.621 "bdev_name": "Malloc0", 00:31:16.621 "name": "Malloc0", 00:31:16.621 "nguid": "1E7FB4D842F1452D87616D16AB8FD40A", 00:31:16.621 "uuid": "1e7fb4d8-42f1-452d-8761-6d16ab8fd40a" 00:31:16.621 } 00:31:16.621 ] 00:31:16.621 } 00:31:16.621 ] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1800462 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:16.621 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.879 Malloc1 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.879 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.880 [ 00:31:16.880 { 00:31:16.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:16.880 "subtype": "Discovery", 00:31:16.880 "listen_addresses": [], 00:31:16.880 "allow_any_host": true, 00:31:16.880 "hosts": [] 00:31:16.880 }, 00:31:16.880 { 00:31:16.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.880 "subtype": "NVMe", 00:31:16.880 "listen_addresses": [ 00:31:16.880 { 00:31:16.880 "trtype": "TCP", 00:31:16.880 "adrfam": "IPv4", 00:31:16.880 "traddr": "10.0.0.2", 00:31:16.880 "trsvcid": "4420" 00:31:16.880 } 00:31:16.880 ], 00:31:16.880 "allow_any_host": true, 00:31:16.880 "hosts": [], 00:31:16.880 "serial_number": "SPDK00000000000001", 00:31:16.880 "model_number": "SPDK bdev Controller", 00:31:16.880 "max_namespaces": 2, 00:31:16.880 "min_cntlid": 1, 00:31:16.880 "max_cntlid": 65519, 00:31:16.880 "namespaces": [ 00:31:16.880 { 00:31:16.880 "nsid": 1, 00:31:16.880 "bdev_name": "Malloc0", 00:31:16.880 "name": "Malloc0", 00:31:16.880 "nguid": "1E7FB4D842F1452D87616D16AB8FD40A", 
00:31:16.880 "uuid": "1e7fb4d8-42f1-452d-8761-6d16ab8fd40a" 00:31:16.880 }, 00:31:16.880 { 00:31:16.880 "nsid": 2, 00:31:16.880 "bdev_name": "Malloc1", 00:31:16.880 "name": "Malloc1", 00:31:16.880 "nguid": "737E36ADD95E4BF98D807D65E445F29D", 00:31:16.880 "uuid": "737e36ad-d95e-4bf9-8d80-7d65e445f29d" 00:31:16.880 } 00:31:16.880 ] 00:31:16.880 } 00:31:16.880 ] 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1800462 00:31:16.880 Asynchronous Event Request test 00:31:16.880 Attaching to 10.0.0.2 00:31:16.880 Attached to 10.0.0.2 00:31:16.880 Registering asynchronous event callbacks... 00:31:16.880 Starting namespace attribute notice tests for all controllers... 00:31:16.880 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:16.880 aer_cb - Changed Namespace 00:31:16.880 Cleaning up... 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.880 rmmod nvme_tcp 00:31:16.880 rmmod nvme_fabrics 00:31:16.880 rmmod nvme_keyring 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1800424 ']' 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1800424 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@950 -- # '[' -z 1800424 ']' 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1800424 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:16.880 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1800424 00:31:17.138 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:17.138 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:17.138 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1800424' 00:31:17.138 killing process with pid 1800424 00:31:17.138 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1800424 00:31:17.138 20:58:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1800424 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.396 20:58:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.942 00:31:19.942 real 0m6.780s 00:31:19.942 user 0m5.309s 00:31:19.942 sys 0m2.832s 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:19.942 ************************************ 00:31:19.942 END TEST nvmf_aer 00:31:19.942 ************************************ 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.942 ************************************ 00:31:19.942 START TEST nvmf_async_init 00:31:19.942 ************************************ 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:19.942 * Looking for test storage... 00:31:19.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.942 --rc genhtml_branch_coverage=1 00:31:19.942 --rc genhtml_function_coverage=1 00:31:19.942 --rc genhtml_legend=1 00:31:19.942 --rc geninfo_all_blocks=1 00:31:19.942 --rc geninfo_unexecuted_blocks=1 00:31:19.942 00:31:19.942 ' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.942 --rc genhtml_branch_coverage=1 00:31:19.942 --rc genhtml_function_coverage=1 00:31:19.942 --rc genhtml_legend=1 00:31:19.942 --rc geninfo_all_blocks=1 00:31:19.942 --rc geninfo_unexecuted_blocks=1 00:31:19.942 00:31:19.942 ' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.942 --rc genhtml_branch_coverage=1 00:31:19.942 --rc genhtml_function_coverage=1 00:31:19.942 --rc genhtml_legend=1 00:31:19.942 --rc geninfo_all_blocks=1 00:31:19.942 --rc geninfo_unexecuted_blocks=1 00:31:19.942 00:31:19.942 ' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.942 --rc genhtml_branch_coverage=1 00:31:19.942 --rc genhtml_function_coverage=1 00:31:19.942 --rc genhtml_legend=1 00:31:19.942 --rc geninfo_all_blocks=1 00:31:19.942 --rc geninfo_unexecuted_blocks=1 00:31:19.942 00:31:19.942 ' 00:31:19.942 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.943 20:58:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:19.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:19.943 20:58:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b6833f691d7841a18719b8a5f5e552aa 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.943 20:58:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:23.234 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.234 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:23.235 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:23.235 Found net devices under 0000:84:00.0: cvl_0_0 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:23.235 Found net devices under 0000:84:00.1: cvl_0_1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.235 20:58:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:31:23.235 00:31:23.235 --- 10.0.0.2 ping statistics --- 00:31:23.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.235 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:31:23.235 00:31:23.235 --- 10.0.0.1 ping statistics --- 00:31:23.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.235 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1802627 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1802627 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1802627 ']' 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.235 20:58:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.235 [2024-10-08 20:58:51.502416] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:31:23.235 [2024-10-08 20:58:51.502520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.235 [2024-10-08 20:58:51.617982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.235 [2024-10-08 20:58:51.833246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.235 [2024-10-08 20:58:51.833363] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.235 [2024-10-08 20:58:51.833401] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.235 [2024-10-08 20:58:51.833432] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.235 [2024-10-08 20:58:51.833457] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.235 [2024-10-08 20:58:51.834855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 [2024-10-08 20:58:52.675338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 null0 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b6833f691d7841a18719b8a5f5e552aa 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.173 [2024-10-08 20:58:52.731936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.173 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 nvme0n1 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 [ 00:31:24.432 { 00:31:24.432 "name": "nvme0n1", 00:31:24.432 "aliases": [ 00:31:24.432 "b6833f69-1d78-41a1-8719-b8a5f5e552aa" 00:31:24.432 ], 00:31:24.432 "product_name": "NVMe disk", 00:31:24.432 "block_size": 512, 00:31:24.432 "num_blocks": 2097152, 00:31:24.432 "uuid": "b6833f69-1d78-41a1-8719-b8a5f5e552aa", 00:31:24.432 "numa_id": 1, 00:31:24.432 "assigned_rate_limits": { 00:31:24.432 "rw_ios_per_sec": 0, 00:31:24.432 "rw_mbytes_per_sec": 0, 00:31:24.432 "r_mbytes_per_sec": 0, 00:31:24.432 "w_mbytes_per_sec": 0 00:31:24.432 }, 00:31:24.432 "claimed": false, 00:31:24.432 "zoned": false, 00:31:24.432 "supported_io_types": { 00:31:24.432 "read": true, 00:31:24.432 "write": true, 00:31:24.432 "unmap": false, 00:31:24.432 "flush": true, 00:31:24.432 "reset": true, 00:31:24.432 "nvme_admin": true, 00:31:24.432 "nvme_io": true, 00:31:24.432 "nvme_io_md": false, 00:31:24.432 "write_zeroes": true, 00:31:24.432 "zcopy": false, 00:31:24.432 "get_zone_info": false, 00:31:24.432 "zone_management": false, 00:31:24.432 "zone_append": false, 00:31:24.432 "compare": true, 00:31:24.432 "compare_and_write": true, 00:31:24.432 "abort": true, 00:31:24.432 "seek_hole": false, 00:31:24.432 "seek_data": false, 00:31:24.432 "copy": true, 00:31:24.432 "nvme_iov_md": false 00:31:24.432 }, 00:31:24.432 
"memory_domains": [ 00:31:24.432 { 00:31:24.432 "dma_device_id": "system", 00:31:24.432 "dma_device_type": 1 00:31:24.432 } 00:31:24.432 ], 00:31:24.432 "driver_specific": { 00:31:24.432 "nvme": [ 00:31:24.432 { 00:31:24.432 "trid": { 00:31:24.432 "trtype": "TCP", 00:31:24.432 "adrfam": "IPv4", 00:31:24.432 "traddr": "10.0.0.2", 00:31:24.432 "trsvcid": "4420", 00:31:24.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:24.432 }, 00:31:24.432 "ctrlr_data": { 00:31:24.432 "cntlid": 1, 00:31:24.432 "vendor_id": "0x8086", 00:31:24.432 "model_number": "SPDK bdev Controller", 00:31:24.432 "serial_number": "00000000000000000000", 00:31:24.432 "firmware_revision": "25.01", 00:31:24.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.432 "oacs": { 00:31:24.432 "security": 0, 00:31:24.432 "format": 0, 00:31:24.432 "firmware": 0, 00:31:24.432 "ns_manage": 0 00:31:24.432 }, 00:31:24.432 "multi_ctrlr": true, 00:31:24.432 "ana_reporting": false 00:31:24.432 }, 00:31:24.432 "vs": { 00:31:24.432 "nvme_version": "1.3" 00:31:24.432 }, 00:31:24.432 "ns_data": { 00:31:24.432 "id": 1, 00:31:24.432 "can_share": true 00:31:24.432 } 00:31:24.432 } 00:31:24.432 ], 00:31:24.432 "mp_policy": "active_passive" 00:31:24.432 } 00:31:24.432 } 00:31:24.432 ] 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 20:58:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 [2024-10-08 20:58:53.001755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.432 [2024-10-08 20:58:53.001954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b69560 (9): Bad file descriptor 00:31:24.432 [2024-10-08 20:58:53.135015] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:24.432 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:24.432 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 [ 00:31:24.432 { 00:31:24.432 "name": "nvme0n1", 00:31:24.432 "aliases": [ 00:31:24.432 "b6833f69-1d78-41a1-8719-b8a5f5e552aa" 00:31:24.432 ], 00:31:24.432 "product_name": "NVMe disk", 00:31:24.432 "block_size": 512, 00:31:24.432 "num_blocks": 2097152, 00:31:24.432 "uuid": "b6833f69-1d78-41a1-8719-b8a5f5e552aa", 00:31:24.432 "numa_id": 1, 00:31:24.432 "assigned_rate_limits": { 00:31:24.432 "rw_ios_per_sec": 0, 00:31:24.432 "rw_mbytes_per_sec": 0, 00:31:24.432 "r_mbytes_per_sec": 0, 00:31:24.432 "w_mbytes_per_sec": 0 00:31:24.432 }, 00:31:24.432 "claimed": false, 00:31:24.432 "zoned": false, 00:31:24.432 "supported_io_types": { 00:31:24.432 "read": true, 00:31:24.432 "write": true, 00:31:24.432 "unmap": false, 00:31:24.432 "flush": true, 00:31:24.432 "reset": true, 00:31:24.432 "nvme_admin": true, 00:31:24.432 "nvme_io": true, 00:31:24.432 "nvme_io_md": false, 00:31:24.432 "write_zeroes": true, 00:31:24.432 "zcopy": false, 00:31:24.432 "get_zone_info": false, 00:31:24.432 "zone_management": false, 00:31:24.432 "zone_append": false, 00:31:24.432 "compare": true, 00:31:24.432 "compare_and_write": true, 00:31:24.432 "abort": true, 00:31:24.432 "seek_hole": false, 00:31:24.432 "seek_data": false, 00:31:24.432 "copy": true, 00:31:24.432 "nvme_iov_md": false 00:31:24.432 }, 00:31:24.432 "memory_domains": [ 00:31:24.432 { 00:31:24.432 "dma_device_id": "system", 00:31:24.432 "dma_device_type": 1 00:31:24.432 } 00:31:24.432 ], 00:31:24.432 "driver_specific": { 00:31:24.432 "nvme": [ 00:31:24.432 { 00:31:24.432 "trid": { 00:31:24.432 "trtype": "TCP", 00:31:24.432 "adrfam": "IPv4", 00:31:24.432 "traddr": "10.0.0.2", 00:31:24.432 "trsvcid": "4420", 00:31:24.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:24.432 }, 00:31:24.432 "ctrlr_data": { 00:31:24.432 "cntlid": 2, 00:31:24.432 "vendor_id": "0x8086", 00:31:24.432 "model_number": "SPDK bdev Controller", 00:31:24.432 "serial_number": "00000000000000000000", 00:31:24.432 "firmware_revision": "25.01", 00:31:24.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.432 "oacs": { 00:31:24.432 "security": 0, 00:31:24.432 "format": 0, 00:31:24.432 "firmware": 0, 00:31:24.432 "ns_manage": 0 00:31:24.432 }, 00:31:24.432 "multi_ctrlr": true, 00:31:24.432 "ana_reporting": false 00:31:24.432 }, 00:31:24.432 "vs": { 00:31:24.432 "nvme_version": "1.3" 00:31:24.432 }, 00:31:24.432 "ns_data": { 00:31:24.432 "id": 1, 00:31:24.432 "can_share": true 00:31:24.432 } 00:31:24.432 } 00:31:24.432 ], 00:31:24.432 "mp_policy": "active_passive" 00:31:24.432 } 00:31:24.432 } 00:31:24.432 ] 00:31:24.432 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
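The two bdev_get_bdevs dumps bracketing the reset differ only in ctrlr_data.cntlid (1 before, 2 after), which is how the test confirms the host really dropped the connection and re-established it as a new controller. When reproducing this by hand, the field can be pulled out directly, e.g. with jq (assumed installed; the JSON path matches the output shown above):

# Print the controller ID currently backing nvme0n1; expect it to change after each
# bdev_nvme_reset_controller.
./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'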
00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.o7H1DAimRc 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.o7H1DAimRc 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.o7H1DAimRc 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.433 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 [2024-10-08 20:58:53.214794] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.692 [2024-10-08 20:58:53.215098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 [2024-10-08 20:58:53.238888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:24.692 nvme0n1 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 [ 00:31:24.692 { 00:31:24.692 "name": "nvme0n1", 00:31:24.692 "aliases": [ 00:31:24.692 "b6833f69-1d78-41a1-8719-b8a5f5e552aa" 00:31:24.692 ], 00:31:24.692 "product_name": "NVMe disk", 00:31:24.692 "block_size": 512, 00:31:24.692 "num_blocks": 2097152, 00:31:24.692 "uuid": "b6833f69-1d78-41a1-8719-b8a5f5e552aa", 00:31:24.692 "numa_id": 1, 00:31:24.692 "assigned_rate_limits": { 00:31:24.692 "rw_ios_per_sec": 0, 00:31:24.692 "rw_mbytes_per_sec": 0, 00:31:24.692 "r_mbytes_per_sec": 0, 00:31:24.692 "w_mbytes_per_sec": 0 00:31:24.692 }, 00:31:24.692 "claimed": false, 00:31:24.692 "zoned": false, 00:31:24.692 "supported_io_types": { 00:31:24.692 "read": true, 00:31:24.692 "write": true, 00:31:24.692 "unmap": false, 00:31:24.692 "flush": true, 00:31:24.692 "reset": true, 00:31:24.692 "nvme_admin": true, 00:31:24.692 "nvme_io": true, 00:31:24.692 "nvme_io_md": false, 00:31:24.692 "write_zeroes": true, 00:31:24.692 "zcopy": false, 00:31:24.692 "get_zone_info": false, 00:31:24.692 "zone_management": false, 00:31:24.692 "zone_append": false, 00:31:24.692 "compare": true, 00:31:24.692 "compare_and_write": true, 00:31:24.692 "abort": true, 00:31:24.692 "seek_hole": false, 00:31:24.692 "seek_data": false, 00:31:24.692 "copy": true, 00:31:24.692 "nvme_iov_md": false 00:31:24.692 }, 00:31:24.692 "memory_domains": [ 00:31:24.692 { 00:31:24.692 "dma_device_id": "system", 00:31:24.692 "dma_device_type": 1 00:31:24.692 } 00:31:24.692 ], 00:31:24.692 "driver_specific": { 00:31:24.692 "nvme": [ 00:31:24.692 { 00:31:24.692 "trid": { 00:31:24.692 "trtype": "TCP", 00:31:24.692 "adrfam": "IPv4", 00:31:24.692 "traddr": "10.0.0.2", 00:31:24.692 "trsvcid": "4421", 00:31:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:24.692 }, 00:31:24.692 "ctrlr_data": { 00:31:24.692 "cntlid": 3, 00:31:24.692 "vendor_id": "0x8086", 00:31:24.692 "model_number": "SPDK bdev Controller", 00:31:24.692 "serial_number": "00000000000000000000", 00:31:24.692 "firmware_revision": "25.01", 00:31:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.692 "oacs": { 00:31:24.692 "security": 0, 00:31:24.692 "format": 0, 00:31:24.692 "firmware": 0, 00:31:24.692 "ns_manage": 0 00:31:24.692 }, 00:31:24.692 "multi_ctrlr": true, 00:31:24.692 "ana_reporting": false 00:31:24.692 }, 00:31:24.692 "vs": { 00:31:24.692 "nvme_version": "1.3" 00:31:24.692 }, 00:31:24.692 "ns_data": { 00:31:24.692 "id": 1, 00:31:24.692 "can_share": true 00:31:24.692 } 00:31:24.692 } 00:31:24.692 ], 00:31:24.692 "mp_policy": "active_passive" 00:31:24.692 } 00:31:24.692 } 00:31:24.692 ] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.o7H1DAimRc 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
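The last leg of async_init.sh (steps 53-66 above) repeats the attach over a TLS-protected listener on port 4421 using a pre-shared key, which is why the final dump shows trsvcid 4421 and cntlid 3. A sketch of the same sequence issued directly, again assuming scripts/rpc.py and reusing the key material, NQNs and flags exactly as they appear in the trace:

# Store the interchange-format PSK and register it with the target keyring as key0.
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.o7H1DAimRc
chmod 0600 /tmp/tmp.o7H1DAimRc
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7H1DAimRc
# Restrict the subsystem to named hosts and open a TLS listener on 4421.
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
# Re-attach as host1, presenting the same key over the secure channel.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0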
00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.692 rmmod nvme_tcp 00:31:24.692 rmmod nvme_fabrics 00:31:24.692 rmmod nvme_keyring 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1802627 ']' 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1802627 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1802627 ']' 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1802627 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:24.692 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1802627 00:31:24.951 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:24.951 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:24.951 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1802627' 00:31:24.951 killing process with pid 1802627 00:31:24.951 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1802627 00:31:24.951 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1802627 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.210 20:58:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.745 00:31:27.745 real 0m7.748s 00:31:27.745 user 0m3.808s 00:31:27.745 sys 0m2.845s 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.745 ************************************ 00:31:27.745 END TEST nvmf_async_init 00:31:27.745 ************************************ 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.745 20:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.745 ************************************ 00:31:27.745 START TEST dma 00:31:27.745 ************************************ 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:27.745 * Looking for test storage... 00:31:27.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.745 --rc genhtml_branch_coverage=1 00:31:27.745 --rc genhtml_function_coverage=1 00:31:27.745 --rc genhtml_legend=1 00:31:27.745 --rc geninfo_all_blocks=1 00:31:27.745 --rc geninfo_unexecuted_blocks=1 00:31:27.745 00:31:27.745 ' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.745 --rc genhtml_branch_coverage=1 00:31:27.745 --rc genhtml_function_coverage=1 00:31:27.745 --rc genhtml_legend=1 00:31:27.745 --rc geninfo_all_blocks=1 00:31:27.745 --rc geninfo_unexecuted_blocks=1 00:31:27.745 00:31:27.745 ' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.745 --rc genhtml_branch_coverage=1 00:31:27.745 --rc genhtml_function_coverage=1 00:31:27.745 --rc genhtml_legend=1 00:31:27.745 --rc geninfo_all_blocks=1 00:31:27.745 --rc geninfo_unexecuted_blocks=1 00:31:27.745 00:31:27.745 ' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.745 --rc genhtml_branch_coverage=1 00:31:27.745 --rc genhtml_function_coverage=1 00:31:27.745 --rc genhtml_legend=1 00:31:27.745 --rc geninfo_all_blocks=1 00:31:27.745 --rc geninfo_unexecuted_blocks=1 00:31:27.745 00:31:27.745 ' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.745 
20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.745 20:58:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:27.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:27.746 00:31:27.746 real 0m0.228s 00:31:27.746 user 0m0.150s 00:31:27.746 sys 0m0.091s 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:27.746 ************************************ 00:31:27.746 END TEST dma 00:31:27.746 ************************************ 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.746 ************************************ 00:31:27.746 START TEST nvmf_identify 00:31:27.746 
************************************ 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.746 * Looking for test storage... 00:31:27.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:27.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.746 --rc genhtml_branch_coverage=1 00:31:27.746 --rc genhtml_function_coverage=1 00:31:27.746 --rc genhtml_legend=1 00:31:27.746 --rc geninfo_all_blocks=1 00:31:27.746 --rc geninfo_unexecuted_blocks=1 00:31:27.746 00:31:27.746 ' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:27.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.746 --rc genhtml_branch_coverage=1 00:31:27.746 --rc genhtml_function_coverage=1 00:31:27.746 --rc genhtml_legend=1 00:31:27.746 --rc geninfo_all_blocks=1 00:31:27.746 --rc geninfo_unexecuted_blocks=1 00:31:27.746 00:31:27.746 ' 00:31:27.746 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:27.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.746 --rc genhtml_branch_coverage=1 00:31:27.747 --rc genhtml_function_coverage=1 00:31:27.747 --rc genhtml_legend=1 00:31:27.747 --rc geninfo_all_blocks=1 00:31:27.747 --rc geninfo_unexecuted_blocks=1 00:31:27.747 00:31:27.747 ' 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:27.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.747 --rc genhtml_branch_coverage=1 00:31:27.747 --rc genhtml_function_coverage=1 00:31:27.747 --rc genhtml_legend=1 00:31:27.747 --rc geninfo_all_blocks=1 00:31:27.747 --rc geninfo_unexecuted_blocks=1 00:31:27.747 00:31:27.747 ' 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.747 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.007 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:28.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.008 20:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:31.302 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:31.302 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:31.302 Found net devices under 0000:84:00.0: cvl_0_0 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:31.302 Found net devices under 0000:84:00.1: cvl_0_1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.302 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:31.303 00:31:31.303 --- 10.0.0.2 ping statistics --- 00:31:31.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.303 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:31:31.303 00:31:31.303 --- 10.0.0.1 ping statistics --- 00:31:31.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.303 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1804958 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1804958 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1804958 ']' 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.303 20:58:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.303 [2024-10-08 20:58:59.609901] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
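identify.sh starts the target by running nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten then blocks until the RPC socket answers before the configuration RPCs that follow are issued. A condensed sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket; the -i 0 -e 0xFFFF -m 0xF flags are the ones visible in the command line above:

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target responds, roughly what waitforlisten does
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done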
00:31:31.303 [2024-10-08 20:58:59.610003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.303 [2024-10-08 20:58:59.721452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.303 [2024-10-08 20:58:59.951128] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.303 [2024-10-08 20:58:59.951243] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.303 [2024-10-08 20:58:59.951281] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.303 [2024-10-08 20:58:59.951312] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.303 [2024-10-08 20:58:59.951338] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.303 [2024-10-08 20:58:59.955032] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.303 [2024-10-08 20:58:59.955132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.303 [2024-10-08 20:58:59.955224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.303 [2024-10-08 20:58:59.955227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 [2024-10-08 20:59:00.101356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 Malloc0 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 [2024-10-08 20:59:00.187617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.565 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 [ 00:31:31.565 { 00:31:31.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:31.565 "subtype": "Discovery", 00:31:31.565 "listen_addresses": [ 00:31:31.565 { 00:31:31.565 "trtype": "TCP", 00:31:31.565 "adrfam": "IPv4", 00:31:31.565 "traddr": "10.0.0.2", 00:31:31.565 "trsvcid": "4420" 00:31:31.565 } 00:31:31.565 ], 00:31:31.565 "allow_any_host": true, 00:31:31.565 "hosts": [] 00:31:31.565 }, 00:31:31.565 { 00:31:31.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.565 "subtype": "NVMe", 00:31:31.565 "listen_addresses": [ 00:31:31.565 { 00:31:31.565 "trtype": "TCP", 00:31:31.565 "adrfam": "IPv4", 00:31:31.565 "traddr": "10.0.0.2", 00:31:31.565 "trsvcid": "4420" 00:31:31.565 } 00:31:31.565 ], 00:31:31.565 "allow_any_host": true, 00:31:31.565 "hosts": [], 00:31:31.565 "serial_number": "SPDK00000000000001", 00:31:31.565 "model_number": "SPDK bdev Controller", 00:31:31.565 "max_namespaces": 32, 00:31:31.565 "min_cntlid": 1, 00:31:31.565 "max_cntlid": 65519, 00:31:31.566 "namespaces": [ 00:31:31.566 { 00:31:31.566 "nsid": 1, 00:31:31.566 "bdev_name": "Malloc0", 00:31:31.566 "name": "Malloc0", 00:31:31.566 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:31.566 "eui64": "ABCDEF0123456789", 00:31:31.566 "uuid": "69168a31-51c1-4d1a-943c-5411687ae343" 00:31:31.566 } 00:31:31.566 ] 00:31:31.566 } 00:31:31.566 ] 00:31:31.566 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.566 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:31.566 [2024-10-08 20:59:00.229050] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:31:31.566 [2024-10-08 20:59:00.229095] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805100 ] 00:31:31.566 [2024-10-08 20:59:00.266829] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:31.566 [2024-10-08 20:59:00.266902] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:31.566 [2024-10-08 20:59:00.266915] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:31.566 [2024-10-08 20:59:00.266937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:31.566 [2024-10-08 20:59:00.266954] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:31.566 [2024-10-08 20:59:00.267772] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:31.566 [2024-10-08 20:59:00.267833] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2143760 0 00:31:31.566 [2024-10-08 20:59:00.277663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:31.566 [2024-10-08 20:59:00.277696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:31.566 [2024-10-08 20:59:00.277708] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:31.566 [2024-10-08 20:59:00.277715] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:31.566 [2024-10-08 20:59:00.277762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.277776] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.277785] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.277804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:31.566 [2024-10-08 20:59:00.277836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 [2024-10-08 20:59:00.285673] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.285693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.566 [2024-10-08 20:59:00.285702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.285712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.285734] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:31.566 [2024-10-08 20:59:00.285748] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:31.566 [2024-10-08 20:59:00.285758] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:31.566 [2024-10-08 20:59:00.285781] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.285791] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.285799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.285812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.566 [2024-10-08 20:59:00.285853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 [2024-10-08 20:59:00.286044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.286058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.566 [2024-10-08 20:59:00.286066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.286084] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:31.566 [2024-10-08 20:59:00.286098] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:31.566 [2024-10-08 20:59:00.286112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.286140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.566 [2024-10-08 20:59:00.286164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 [2024-10-08 20:59:00.286343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.286356] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.566 [2024-10-08 20:59:00.286364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.286382] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:31.566 [2024-10-08 20:59:00.286397] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.286411] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286419] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286427] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.286439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.566 [2024-10-08 20:59:00.286462] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 
[2024-10-08 20:59:00.286591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.286604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.566 [2024-10-08 20:59:00.286612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.286630] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.286648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286669] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.286688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.566 [2024-10-08 20:59:00.286712] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 [2024-10-08 20:59:00.286838] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.286854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.566 [2024-10-08 20:59:00.286863] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.286870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.286880] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:31.566 [2024-10-08 20:59:00.286890] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.286905] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.287016] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:31.566 [2024-10-08 20:59:00.287025] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.287041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.287050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.287057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.287069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.566 [2024-10-08 20:59:00.287094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.566 [2024-10-08 20:59:00.287274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.566 [2024-10-08 20:59:00.287288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:31:31.566 [2024-10-08 20:59:00.287295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.287303] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.566 [2024-10-08 20:59:00.287313] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:31.566 [2024-10-08 20:59:00.287331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.287342] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.566 [2024-10-08 20:59:00.287349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.566 [2024-10-08 20:59:00.287361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.567 [2024-10-08 20:59:00.287383] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.567 [2024-10-08 20:59:00.287512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.567 [2024-10-08 20:59:00.287525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.567 [2024-10-08 20:59:00.287533] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.567 [2024-10-08 20:59:00.287540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.567 [2024-10-08 20:59:00.287548] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:31.567 [2024-10-08 20:59:00.287558] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:31.567 [2024-10-08 20:59:00.287572] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:31.567 [2024-10-08 20:59:00.287589] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:31.567 [2024-10-08 20:59:00.287611] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.567 [2024-10-08 20:59:00.287621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.567 [2024-10-08 20:59:00.287633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.567 [2024-10-08 20:59:00.287676] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.567 [2024-10-08 20:59:00.287903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:31.567 [2024-10-08 20:59:00.287919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:31.567 [2024-10-08 20:59:00.287927] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:31.567 [2024-10-08 20:59:00.287935] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2143760): datao=0, datal=4096, cccid=0 00:31:31.567 [2024-10-08 20:59:00.287944] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a3480) on tqpair(0x2143760): expected_datao=0, 
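
The _nvme_ctrlr_set_state messages above trace the standard NVMe controller enable handshake, carried here as the FABRIC PROPERTY GET/SET capsules visible in the same records: the host first waits for CSTS.RDY = 0 with CC.EN = 0, writes CC.EN = 1, then waits for CSTS.RDY = 1 before moving on to IDENTIFY. A minimal sketch of that handshake follows; prop_get()/prop_set() are hypothetical stand-ins for the transport's property accessors (not SPDK API), and the register offsets are the standard NVMe ones.

/* Illustrative sketch of the CC.EN / CSTS.RDY handshake traced above.
 * prop_get()/prop_set() are hypothetical helpers standing in for the
 * FABRIC PROPERTY GET/SET capsules seen in the log. */
#include <stdint.h>

#define NVME_REG_CC    0x14   /* Controller Configuration */
#define NVME_REG_CSTS  0x1c   /* Controller Status        */
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

extern uint32_t prop_get(uint32_t offset);               /* hypothetical */
extern void     prop_set(uint32_t offset, uint32_t val); /* hypothetical */

static void enable_controller(void)
{
        /* "disable and wait for CSTS.RDY = 0" */
        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
                ;
        /* "Setting CC.EN = 1" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
        /* "wait for CSTS.RDY = 1" -> controller is ready, IDENTIFY follows */
        while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
                ;
}
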
payload_size=4096 00:31:31.567 [2024-10-08 20:59:00.287953] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.567 [2024-10-08 20:59:00.287973] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:31.567 [2024-10-08 20:59:00.287983] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.328855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.832 [2024-10-08 20:59:00.328877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.832 [2024-10-08 20:59:00.328887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.328895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.832 [2024-10-08 20:59:00.328909] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:31.832 [2024-10-08 20:59:00.328919] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:31.832 [2024-10-08 20:59:00.328928] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:31.832 [2024-10-08 20:59:00.328938] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:31.832 [2024-10-08 20:59:00.328947] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:31.832 [2024-10-08 20:59:00.328956] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:31.832 [2024-10-08 20:59:00.328979] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:31.832 [2024-10-08 20:59:00.328996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.832 [2024-10-08 20:59:00.329025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:31.832 [2024-10-08 20:59:00.329053] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.832 [2024-10-08 20:59:00.329189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.832 [2024-10-08 20:59:00.329205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.832 [2024-10-08 20:59:00.329213] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329221] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.832 [2024-10-08 20:59:00.329234] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2143760) 00:31:31.832 [2024-10-08 20:59:00.329269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.832 [2024-10-08 20:59:00.329280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329288] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2143760) 00:31:31.832 [2024-10-08 20:59:00.329305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.832 [2024-10-08 20:59:00.329316] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.832 [2024-10-08 20:59:00.329330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329337] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.329347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.833 [2024-10-08 20:59:00.329358] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329372] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.329382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.833 [2024-10-08 20:59:00.329392] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:31.833 [2024-10-08 20:59:00.329415] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:31.833 [2024-10-08 20:59:00.329430] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.329451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.833 [2024-10-08 20:59:00.329477] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3480, cid 0, qid 0 00:31:31.833 [2024-10-08 20:59:00.329491] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3600, cid 1, qid 0 00:31:31.833 [2024-10-08 20:59:00.329500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3780, cid 2, qid 0 00:31:31.833 [2024-10-08 20:59:00.329509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.833 [2024-10-08 20:59:00.329517] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3a80, cid 4, qid 0 00:31:31.833 [2024-10-08 20:59:00.329724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.833 [2024-10-08 20:59:00.329743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.833 [2024-10-08 20:59:00.329752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21a3a80) on tqpair=0x2143760 00:31:31.833 [2024-10-08 20:59:00.329771] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:31.833 [2024-10-08 20:59:00.329791] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:31.833 [2024-10-08 20:59:00.329813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.329825] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.329838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.833 [2024-10-08 20:59:00.329868] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3a80, cid 4, qid 0 00:31:31.833 [2024-10-08 20:59:00.330069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:31.833 [2024-10-08 20:59:00.330084] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:31.833 [2024-10-08 20:59:00.330093] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330100] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2143760): datao=0, datal=4096, cccid=4 00:31:31.833 [2024-10-08 20:59:00.330109] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a3a80) on tqpair(0x2143760): expected_datao=0, payload_size=4096 00:31:31.833 [2024-10-08 20:59:00.330117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330129] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330138] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.833 [2024-10-08 20:59:00.330176] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.833 [2024-10-08 20:59:00.330184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3a80) on tqpair=0x2143760 00:31:31.833 [2024-10-08 20:59:00.330214] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:31.833 [2024-10-08 20:59:00.330256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.330282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.833 [2024-10-08 20:59:00.330295] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.330321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.833 [2024-10-08 
20:59:00.330346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3a80, cid 4, qid 0 00:31:31.833 [2024-10-08 20:59:00.330359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3c00, cid 5, qid 0 00:31:31.833 [2024-10-08 20:59:00.330591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:31.833 [2024-10-08 20:59:00.330609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:31.833 [2024-10-08 20:59:00.330617] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2143760): datao=0, datal=1024, cccid=4 00:31:31.833 [2024-10-08 20:59:00.330633] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a3a80) on tqpair(0x2143760): expected_datao=0, payload_size=1024 00:31:31.833 [2024-10-08 20:59:00.330642] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330664] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330676] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.833 [2024-10-08 20:59:00.330696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.833 [2024-10-08 20:59:00.330703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.330711] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3c00) on tqpair=0x2143760 00:31:31.833 [2024-10-08 20:59:00.370825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.833 [2024-10-08 20:59:00.370846] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.833 [2024-10-08 20:59:00.370855] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.370863] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3a80) on tqpair=0x2143760 00:31:31.833 [2024-10-08 20:59:00.370893] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.370906] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.370919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.833 [2024-10-08 20:59:00.370954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3a80, cid 4, qid 0 00:31:31.833 [2024-10-08 20:59:00.371071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:31.833 [2024-10-08 20:59:00.371087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:31.833 [2024-10-08 20:59:00.371095] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.371102] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2143760): datao=0, datal=3072, cccid=4 00:31:31.833 [2024-10-08 20:59:00.371111] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a3a80) on tqpair(0x2143760): expected_datao=0, payload_size=3072 00:31:31.833 [2024-10-08 20:59:00.371119] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.371143] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.371153] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.415668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.833 [2024-10-08 20:59:00.415686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.833 [2024-10-08 20:59:00.415694] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.415702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3a80) on tqpair=0x2143760 00:31:31.833 [2024-10-08 20:59:00.415718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.415728] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2143760) 00:31:31.833 [2024-10-08 20:59:00.415739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.833 [2024-10-08 20:59:00.415770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3a80, cid 4, qid 0 00:31:31.833 [2024-10-08 20:59:00.415926] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:31.833 [2024-10-08 20:59:00.415953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:31.833 [2024-10-08 20:59:00.415960] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.415967] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2143760): datao=0, datal=8, cccid=4 00:31:31.833 [2024-10-08 20:59:00.415974] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a3a80) on tqpair(0x2143760): expected_datao=0, payload_size=8 00:31:31.833 [2024-10-08 20:59:00.415981] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.415991] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.416013] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:31.833 [2024-10-08 20:59:00.459684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.834 [2024-10-08 20:59:00.459712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.834 [2024-10-08 20:59:00.459720] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.834 [2024-10-08 20:59:00.459727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3a80) on tqpair=0x2143760 00:31:31.834 ===================================================== 00:31:31.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:31.834 ===================================================== 00:31:31.834 Controller Capabilities/Features 00:31:31.834 ================================ 00:31:31.834 Vendor ID: 0000 00:31:31.834 Subsystem Vendor ID: 0000 00:31:31.834 Serial Number: .................... 00:31:31.834 Model Number: ........................................ 
00:31:31.834 Firmware Version: 25.01
00:31:31.834 Recommended Arb Burst: 0
00:31:31.834 IEEE OUI Identifier: 00 00 00
00:31:31.834 Multi-path I/O
00:31:31.834 May have multiple subsystem ports: No
00:31:31.834 May have multiple controllers: No
00:31:31.834 Associated with SR-IOV VF: No
00:31:31.834 Max Data Transfer Size: 131072
00:31:31.834 Max Number of Namespaces: 0
00:31:31.834 Max Number of I/O Queues: 1024
00:31:31.834 NVMe Specification Version (VS): 1.3
00:31:31.834 NVMe Specification Version (Identify): 1.3
00:31:31.834 Maximum Queue Entries: 128
00:31:31.834 Contiguous Queues Required: Yes
00:31:31.834 Arbitration Mechanisms Supported
00:31:31.834 Weighted Round Robin: Not Supported
00:31:31.834 Vendor Specific: Not Supported
00:31:31.834 Reset Timeout: 15000 ms
00:31:31.834 Doorbell Stride: 4 bytes
00:31:31.834 NVM Subsystem Reset: Not Supported
00:31:31.834 Command Sets Supported
00:31:31.834 NVM Command Set: Supported
00:31:31.834 Boot Partition: Not Supported
00:31:31.834 Memory Page Size Minimum: 4096 bytes
00:31:31.834 Memory Page Size Maximum: 4096 bytes
00:31:31.834 Persistent Memory Region: Not Supported
00:31:31.834 Optional Asynchronous Events Supported
00:31:31.834 Namespace Attribute Notices: Not Supported
00:31:31.834 Firmware Activation Notices: Not Supported
00:31:31.834 ANA Change Notices: Not Supported
00:31:31.834 PLE Aggregate Log Change Notices: Not Supported
00:31:31.834 LBA Status Info Alert Notices: Not Supported
00:31:31.834 EGE Aggregate Log Change Notices: Not Supported
00:31:31.834 Normal NVM Subsystem Shutdown event: Not Supported
00:31:31.834 Zone Descriptor Change Notices: Not Supported
00:31:31.834 Discovery Log Change Notices: Supported
00:31:31.834 Controller Attributes
00:31:31.834 128-bit Host Identifier: Not Supported
00:31:31.834 Non-Operational Permissive Mode: Not Supported
00:31:31.834 NVM Sets: Not Supported
00:31:31.834 Read Recovery Levels: Not Supported
00:31:31.834 Endurance Groups: Not Supported
00:31:31.834 Predictable Latency Mode: Not Supported
00:31:31.834 Traffic Based Keep ALive: Not Supported
00:31:31.834 Namespace Granularity: Not Supported
00:31:31.834 SQ Associations: Not Supported
00:31:31.834 UUID List: Not Supported
00:31:31.834 Multi-Domain Subsystem: Not Supported
00:31:31.834 Fixed Capacity Management: Not Supported
00:31:31.834 Variable Capacity Management: Not Supported
00:31:31.834 Delete Endurance Group: Not Supported
00:31:31.834 Delete NVM Set: Not Supported
00:31:31.834 Extended LBA Formats Supported: Not Supported
00:31:31.834 Flexible Data Placement Supported: Not Supported
00:31:31.834
00:31:31.834 Controller Memory Buffer Support
00:31:31.834 ================================
00:31:31.834 Supported: No
00:31:31.834
00:31:31.834 Persistent Memory Region Support
00:31:31.834 ================================
00:31:31.834 Supported: No
00:31:31.834
00:31:31.834 Admin Command Set Attributes
00:31:31.834 ============================
00:31:31.834 Security Send/Receive: Not Supported
00:31:31.834 Format NVM: Not Supported
00:31:31.834 Firmware Activate/Download: Not Supported
00:31:31.834 Namespace Management: Not Supported
00:31:31.834 Device Self-Test: Not Supported
00:31:31.834 Directives: Not Supported
00:31:31.834 NVMe-MI: Not Supported
00:31:31.834 Virtualization Management: Not Supported
00:31:31.834 Doorbell Buffer Config: Not Supported
00:31:31.834 Get LBA Status Capability: Not Supported
00:31:31.834 Command & Feature Lockdown Capability: Not Supported
00:31:31.834 Abort Command Limit: 1
00:31:31.834 Async Event Request Limit: 4
00:31:31.834 Number of Firmware Slots: N/A
00:31:31.834 Firmware Slot 1 Read-Only: N/A
00:31:31.834 Firmware Activation Without Reset: N/A
00:31:31.834 Multiple Update Detection Support: N/A
00:31:31.834 Firmware Update Granularity: No Information Provided
00:31:31.834 Per-Namespace SMART Log: No
00:31:31.834 Asymmetric Namespace Access Log Page: Not Supported
00:31:31.834 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:31.834 Command Effects Log Page: Not Supported
00:31:31.834 Get Log Page Extended Data: Supported
00:31:31.834 Telemetry Log Pages: Not Supported
00:31:31.834 Persistent Event Log Pages: Not Supported
00:31:31.834 Supported Log Pages Log Page: May Support
00:31:31.834 Commands Supported & Effects Log Page: Not Supported
00:31:31.834 Feature Identifiers & Effects Log Page:May Support
00:31:31.834 NVMe-MI Commands & Effects Log Page: May Support
00:31:31.834 Data Area 4 for Telemetry Log: Not Supported
00:31:31.834 Error Log Page Entries Supported: 128
00:31:31.834 Keep Alive: Not Supported
00:31:31.834
00:31:31.834 NVM Command Set Attributes
00:31:31.834 ==========================
00:31:31.834 Submission Queue Entry Size
00:31:31.834 Max: 1
00:31:31.834 Min: 1
00:31:31.834 Completion Queue Entry Size
00:31:31.834 Max: 1
00:31:31.834 Min: 1
00:31:31.834 Number of Namespaces: 0
00:31:31.834 Compare Command: Not Supported
00:31:31.834 Write Uncorrectable Command: Not Supported
00:31:31.834 Dataset Management Command: Not Supported
00:31:31.834 Write Zeroes Command: Not Supported
00:31:31.834 Set Features Save Field: Not Supported
00:31:31.834 Reservations: Not Supported
00:31:31.834 Timestamp: Not Supported
00:31:31.834 Copy: Not Supported
00:31:31.834 Volatile Write Cache: Not Present
00:31:31.834 Atomic Write Unit (Normal): 1
00:31:31.834 Atomic Write Unit (PFail): 1
00:31:31.834 Atomic Compare & Write Unit: 1
00:31:31.834 Fused Compare & Write: Supported
00:31:31.834 Scatter-Gather List
00:31:31.834 SGL Command Set: Supported
00:31:31.834 SGL Keyed: Supported
00:31:31.834 SGL Bit Bucket Descriptor: Not Supported
00:31:31.834 SGL Metadata Pointer: Not Supported
00:31:31.834 Oversized SGL: Not Supported
00:31:31.834 SGL Metadata Address: Not Supported
00:31:31.834 SGL Offset: Supported
00:31:31.834 Transport SGL Data Block: Not Supported
00:31:31.834 Replay Protected Memory Block: Not Supported
00:31:31.834
00:31:31.834 Firmware Slot Information
00:31:31.834 =========================
00:31:31.834 Active slot: 0
00:31:31.834
00:31:31.834
00:31:31.834 Error Log
00:31:31.834 =========
00:31:31.834
00:31:31.834 Active Namespaces
00:31:31.834 =================
00:31:31.834 Discovery Log Page
00:31:31.834 ==================
00:31:31.834 Generation Counter: 2
00:31:31.834 Number of Records: 2
00:31:31.834 Record Format: 0
00:31:31.834
00:31:31.834 Discovery Log Entry 0
00:31:31.834 ----------------------
00:31:31.834 Transport Type: 3 (TCP)
00:31:31.834 Address Family: 1 (IPv4)
00:31:31.834 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:31.834 Entry Flags:
00:31:31.834 Duplicate Returned Information: 1
00:31:31.834 Explicit Persistent Connection Support for Discovery: 1
00:31:31.834 Transport Requirements:
00:31:31.834 Secure Channel: Not Required
00:31:31.834 Port ID: 0 (0x0000)
00:31:31.834 Controller ID: 65535 (0xffff)
00:31:31.834 Admin Max SQ Size: 128
00:31:31.834 Transport Service Identifier: 4420
00:31:31.834 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:31.835 Transport Address: 10.0.0.2
00:31:31.835
Discovery Log Entry 1 00:31:31.835 ---------------------- 00:31:31.835 Transport Type: 3 (TCP) 00:31:31.835 Address Family: 1 (IPv4) 00:31:31.835 Subsystem Type: 2 (NVM Subsystem) 00:31:31.835 Entry Flags: 00:31:31.835 Duplicate Returned Information: 0 00:31:31.835 Explicit Persistent Connection Support for Discovery: 0 00:31:31.835 Transport Requirements: 00:31:31.835 Secure Channel: Not Required 00:31:31.835 Port ID: 0 (0x0000) 00:31:31.835 Controller ID: 65535 (0xffff) 00:31:31.835 Admin Max SQ Size: 128 00:31:31.835 Transport Service Identifier: 4420 00:31:31.835 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:31.835 Transport Address: 10.0.0.2 [2024-10-08 20:59:00.459837] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:31.835 [2024-10-08 20:59:00.459862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3480) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.459875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.835 [2024-10-08 20:59:00.459885] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3600) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.459893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.835 [2024-10-08 20:59:00.459901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3780) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.459909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.835 [2024-10-08 20:59:00.459918] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.459942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.835 [2024-10-08 20:59:00.459965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.459973] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.459980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.460005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.460032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.460179] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.460193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.460200] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460206] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.460218] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460231] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 
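
The discovery log dumped above was fetched with the three GET LOG PAGE (02) admin commands visible earlier in the trace (cdw10 0x00ff0070, 0x02ff0070 and 0x00010070, answered by C2H data PDUs of 1024, 3072 and 8 bytes). In Get Log Page, CDW10 bits 7:0 carry the log page identifier (0x70 = Discovery) and bits 31:16 carry NUMDL, a 0's-based dword count, so the three reads appear to be the page header, the full page including both 1024-byte records, and a re-read of the generation counter. The small, self-contained decoder below (not SPDK code) just reproduces that arithmetic:

/* Decode the Get Log Page CDW10 values from the trace above.
 * Standard NVMe layout: bits 7:0 = LID, bits 31:16 = NUMDL (0's based). */
#include <stdint.h>
#include <stdio.h>

static void decode_cdw10(uint32_t cdw10)
{
        uint32_t lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xffff;

        printf("cdw10=0x%08x -> LID=0x%02x, %u bytes\n",
               cdw10, lid, (numdl + 1) * 4);
}

int main(void)
{
        decode_cdw10(0x00ff0070);   /* -> LID=0x70, 1024 bytes */
        decode_cdw10(0x02ff0070);   /* -> LID=0x70, 3072 bytes */
        decode_cdw10(0x00010070);   /* -> LID=0x70,    8 bytes */
        return 0;
}
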
20:59:00.460242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.460268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.460369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.460381] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.460387] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.460401] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:31.835 [2024-10-08 20:59:00.460409] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:31.835 [2024-10-08 20:59:00.460429] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460445] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.460455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.460475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.460582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.460600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.460608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.460663] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460674] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460681] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.460692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.460714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.460804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.460818] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.460825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460832] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.460848] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460858] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.460865] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.460875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.460897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.460994] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.461008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.461014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.461036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461045] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.461062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.461082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.461160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.461173] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.461179] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.461201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461210] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461216] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.461226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.461246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.461337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.461348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.461359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461366] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.461382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461398] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.461408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.835 [2024-10-08 20:59:00.461428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.835 [2024-10-08 20:59:00.461505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.835 [2024-10-08 20:59:00.461518] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.835 [2024-10-08 20:59:00.461525] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.835 [2024-10-08 20:59:00.461547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461556] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.835 [2024-10-08 20:59:00.461562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.835 [2024-10-08 20:59:00.461572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.461603] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.461734] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.461750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.461757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.461764] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.461780] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.461789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.461796] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.461807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.461828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.461924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.461938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.461945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.461952] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.461969] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.461978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462031] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 
[2024-10-08 20:59:00.462134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.462146] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.462153] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.462181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.462310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.462323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.462330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.462353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462400] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.462495] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.462508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.462515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462521] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.462536] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.462685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.462700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
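
From "Prepare to destruct SSD" onward the trace is the teardown path: RTD3E is reported as 0, so a default shutdown timeout of 10000 ms is used, and the long run of near-identical FABRIC PROPERTY GET capsules on cid:3 that follows is the host polling the controller's status over the admin queue (presumably CSTS.SHST) until shutdown processing completes or the timeout expires. A rough sketch of that polling pattern, reusing the hypothetical prop_get() helper from the earlier sketch:

/* Rough sketch of the shutdown poll implied by the repeated property
 * reads above; not SPDK's actual code. CSTS.SHST (bits 3:2) reads 10b
 * once shutdown processing is complete. */
#include <stdbool.h>
#include <stdint.h>

#define NVME_REG_CSTS        0x1c
#define NVME_CSTS_SHST_MASK  (3u << 2)
#define NVME_CSTS_SHST_DONE  (2u << 2)

extern uint32_t prop_get(uint32_t offset);   /* hypothetical, as above */
extern uint64_t now_ms(void);                /* hypothetical clock     */

static bool wait_for_shutdown_complete(uint64_t timeout_ms)
{
        uint64_t deadline = now_ms() + timeout_ms;   /* 10000 ms in this run */

        while (now_ms() < deadline) {
                uint32_t csts = prop_get(NVME_REG_CSTS);

                if ((csts & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_DONE)
                        return true;    /* shutdown complete */
        }
        return false;                   /* timed out */
}
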
00:31:31.836 [2024-10-08 20:59:00.462708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462715] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.462732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462781] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.462865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.462879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.462886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462893] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.462915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.462932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.462943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.462964] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.463045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.463057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.463079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.463103] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.463128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.463149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.463240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.463252] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.463274] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.463297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.463323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.463343] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.463453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.463467] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.463474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.463496] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463506] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463512] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.463522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.463543] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.463621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.463657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.836 [2024-10-08 20:59:00.463666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.836 [2024-10-08 20:59:00.463691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463707] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.836 [2024-10-08 20:59:00.463715] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.836 [2024-10-08 20:59:00.463726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.836 [2024-10-08 20:59:00.463749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.836 [2024-10-08 20:59:00.463834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.836 [2024-10-08 20:59:00.463848] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.463855] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.463862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.463879] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.463888] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.463895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.463905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.463927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464064] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464089] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464096] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.464106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.464127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464273] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.464283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.464303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464375] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464393] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464415] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464434] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 
[2024-10-08 20:59:00.464444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.464464] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464549] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464556] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.464603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.464622] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464753] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464784] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464800] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.464811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.464833] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.464928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.464942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.464949] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464956] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.464972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.464988] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.465013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.465035] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.465114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.465127] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.465134] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465140] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.465156] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.465186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.465208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.465302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.465314] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.465320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465327] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.465343] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465358] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.837 [2024-10-08 20:59:00.465368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.837 [2024-10-08 20:59:00.465387] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.837 [2024-10-08 20:59:00.465477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.837 [2024-10-08 20:59:00.465489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.837 [2024-10-08 20:59:00.465496] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.837 [2024-10-08 20:59:00.465502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.837 [2024-10-08 20:59:00.465519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465534] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.465544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.465564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.465663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 
[2024-10-08 20:59:00.465677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.465684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.465724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.465751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.465773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.465862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.465875] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.465883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.465906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.465922] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.465947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.465972] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.466081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.466095] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.466102] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.466125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466134] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.466151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.466172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.466247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.466260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.466267] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:31.838 [2024-10-08 20:59:00.466274] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.466290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466299] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.466316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.466336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.466427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.466439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.466445] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.466468] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466476] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.466493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.466512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.466584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.466595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.466601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.466623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.466638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2143760) 00:31:31.838 [2024-10-08 20:59:00.466648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.838 [2024-10-08 20:59:00.470693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a3900, cid 3, qid 0 00:31:31.838 [2024-10-08 20:59:00.470867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:31.838 [2024-10-08 20:59:00.470880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:31.838 [2024-10-08 20:59:00.470887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.470894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a3900) on tqpair=0x2143760 00:31:31.838 [2024-10-08 20:59:00.470907] 
nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 10 milliseconds 00:31:31.838 00:31:31.838 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:31.838 [2024-10-08 20:59:00.527247] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:31:31.838 [2024-10-08 20:59:00.527347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805108 ] 00:31:31.838 [2024-10-08 20:59:00.578989] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:31.838 [2024-10-08 20:59:00.579040] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:31.838 [2024-10-08 20:59:00.579050] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:31.838 [2024-10-08 20:59:00.579066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:31.838 [2024-10-08 20:59:00.579079] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:31.838 [2024-10-08 20:59:00.579544] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:31.838 [2024-10-08 20:59:00.579607] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x188c760 0 00:31:31.838 [2024-10-08 20:59:00.585672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:31.838 [2024-10-08 20:59:00.585712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:31.838 [2024-10-08 20:59:00.585722] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:31.838 [2024-10-08 20:59:00.585728] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:31.838 [2024-10-08 20:59:00.585761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.585773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:31.838 [2024-10-08 20:59:00.585780] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:31.838 [2024-10-08 20:59:00.585794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:31.838 [2024-10-08 20:59:00.585822] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.593666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.593687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.593695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.593702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.593722] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:32.101 [2024-10-08 20:59:00.593734] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:32.101 [2024-10-08 20:59:00.593748] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:32.101 [2024-10-08 20:59:00.593765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.593775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.593781] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.593793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.593817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.593983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.593998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.594006] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.594021] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:32.101 [2024-10-08 20:59:00.594034] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:32.101 [2024-10-08 20:59:00.594047] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.594073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.594095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.594230] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.594243] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.594249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.594264] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:32.101 [2024-10-08 20:59:00.594277] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.594289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594302] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.594312] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.594332] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.594479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.594493] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.594499] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594505] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.594513] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.594529] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.594559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.594579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.594681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.594695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.594702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594709] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.594717] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:32.101 [2024-10-08 20:59:00.594725] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.594738] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.594847] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:32.101 [2024-10-08 20:59:00.594854] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.594867] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594874] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.594881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.594891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.594918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 
00:31:32.101 [2024-10-08 20:59:00.595069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.595081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.595088] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.595095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.595103] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:32.101 [2024-10-08 20:59:00.595119] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.595127] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.595134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.595144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.595164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.595258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.101 [2024-10-08 20:59:00.595272] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.101 [2024-10-08 20:59:00.595278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.595285] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.101 [2024-10-08 20:59:00.595292] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:32.101 [2024-10-08 20:59:00.595300] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:32.101 [2024-10-08 20:59:00.595316] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:32.101 [2024-10-08 20:59:00.595333] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:32.101 [2024-10-08 20:59:00.595347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.101 [2024-10-08 20:59:00.595355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.101 [2024-10-08 20:59:00.595366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.101 [2024-10-08 20:59:00.595386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.101 [2024-10-08 20:59:00.595574] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.102 [2024-10-08 20:59:00.595588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.102 [2024-10-08 20:59:00.595595] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595601] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=4096, cccid=0 
00:31:32.102 [2024-10-08 20:59:00.595608] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ec480) on tqpair(0x188c760): expected_datao=0, payload_size=4096 00:31:32.102 [2024-10-08 20:59:00.595614] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595624] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595645] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.102 [2024-10-08 20:59:00.595679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.102 [2024-10-08 20:59:00.595686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595693] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.102 [2024-10-08 20:59:00.595712] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:32.102 [2024-10-08 20:59:00.595720] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:32.102 [2024-10-08 20:59:00.595728] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:32.102 [2024-10-08 20:59:00.595735] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:32.102 [2024-10-08 20:59:00.595743] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:32.102 [2024-10-08 20:59:00.595751] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.595782] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.595796] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.595810] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.595821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:32.102 [2024-10-08 20:59:00.595843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.102 [2024-10-08 20:59:00.595958] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.102 [2024-10-08 20:59:00.595971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.102 [2024-10-08 20:59:00.595993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.102 [2024-10-08 20:59:00.596014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596022] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596028] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 
20:59:00.596038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.102 [2024-10-08 20:59:00.596065] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.102 [2024-10-08 20:59:00.596097] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596110] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.102 [2024-10-08 20:59:00.596128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.102 [2024-10-08 20:59:00.596159] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596178] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.102 [2024-10-08 20:59:00.596242] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec480, cid 0, qid 0 00:31:32.102 [2024-10-08 20:59:00.596253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec600, cid 1, qid 0 00:31:32.102 [2024-10-08 20:59:00.596261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec780, cid 2, qid 0 00:31:32.102 [2024-10-08 20:59:00.596268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.102 [2024-10-08 20:59:00.596276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.102 [2024-10-08 20:59:00.596486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.102 [2024-10-08 20:59:00.596500] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.102 [2024-10-08 20:59:00.596507] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 
20:59:00.596513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.102 [2024-10-08 20:59:00.596521] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:32.102 [2024-10-08 20:59:00.596529] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596550] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596565] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596576] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596590] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:32.102 [2024-10-08 20:59:00.596619] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.102 [2024-10-08 20:59:00.596752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.102 [2024-10-08 20:59:00.596767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.102 [2024-10-08 20:59:00.596773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.102 [2024-10-08 20:59:00.596844] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596862] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.596877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.596885] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.596895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.102 [2024-10-08 20:59:00.596924] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.102 [2024-10-08 20:59:00.597071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.102 [2024-10-08 20:59:00.597086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.102 [2024-10-08 20:59:00.597092] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.597098] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=4096, cccid=4 00:31:32.102 [2024-10-08 20:59:00.597105] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eca80) on tqpair(0x188c760): expected_datao=0, 
payload_size=4096 00:31:32.102 [2024-10-08 20:59:00.597112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.597129] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.597152] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.638668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.102 [2024-10-08 20:59:00.638687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.102 [2024-10-08 20:59:00.638695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.638702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.102 [2024-10-08 20:59:00.638724] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:32.102 [2024-10-08 20:59:00.638740] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.638759] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:32.102 [2024-10-08 20:59:00.638773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.102 [2024-10-08 20:59:00.638788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.102 [2024-10-08 20:59:00.638800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.102 [2024-10-08 20:59:00.638825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.103 [2024-10-08 20:59:00.639006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.103 [2024-10-08 20:59:00.639018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.103 [2024-10-08 20:59:00.639025] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.639031] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=4096, cccid=4 00:31:32.103 [2024-10-08 20:59:00.639038] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eca80) on tqpair(0x188c760): expected_datao=0, payload_size=4096 00:31:32.103 [2024-10-08 20:59:00.639046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.639061] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.639070] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.683668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.683687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.683696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.683703] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.683725] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.683745] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.683760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.683768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.683780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.683803] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.103 [2024-10-08 20:59:00.683917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.103 [2024-10-08 20:59:00.683946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.103 [2024-10-08 20:59:00.683954] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.683960] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=4096, cccid=4 00:31:32.103 [2024-10-08 20:59:00.683967] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eca80) on tqpair(0x188c760): expected_datao=0, payload_size=4096 00:31:32.103 [2024-10-08 20:59:00.683974] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.683991] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.684000] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.724791] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.724809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.724816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.724823] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.724837] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724853] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724871] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724883] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724891] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724899] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724906] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:32.103 [2024-10-08 20:59:00.724914] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to transport ready (timeout 30000 ms) 00:31:32.103 [2024-10-08 20:59:00.724922] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:32.103 [2024-10-08 20:59:00.724941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.724949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.724960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.724987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.724994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.103 [2024-10-08 20:59:00.725032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.103 [2024-10-08 20:59:00.725043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecc00, cid 5, qid 0 00:31:32.103 [2024-10-08 20:59:00.725163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.725176] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.725183] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.725199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.725208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.725214] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725221] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecc00) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.725236] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725244] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecc00, cid 5, qid 0 00:31:32.103 [2024-10-08 20:59:00.725359] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.725372] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.725378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecc00) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.725403] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725413] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725443] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecc00, cid 5, qid 0 00:31:32.103 [2024-10-08 20:59:00.725526] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.725539] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.725546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecc00) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.725567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725576] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725605] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecc00, cid 5, qid 0 00:31:32.103 [2024-10-08 20:59:00.725719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.103 [2024-10-08 20:59:00.725735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.103 [2024-10-08 20:59:00.725742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecc00) on tqpair=0x188c760 00:31:32.103 [2024-10-08 20:59:00.725774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725809] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 20:59:00.725856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725868] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.103 [2024-10-08 20:59:00.725876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x188c760) 00:31:32.103 [2024-10-08 
20:59:00.725886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.103 [2024-10-08 20:59:00.725908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecc00, cid 5, qid 0 00:31:32.103 [2024-10-08 20:59:00.725920] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18eca80, cid 4, qid 0 00:31:32.104 [2024-10-08 20:59:00.725928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecd80, cid 6, qid 0 00:31:32.104 [2024-10-08 20:59:00.725935] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecf00, cid 7, qid 0 00:31:32.104 [2024-10-08 20:59:00.726162] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.104 [2024-10-08 20:59:00.726180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.104 [2024-10-08 20:59:00.726187] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726194] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=8192, cccid=5 00:31:32.104 [2024-10-08 20:59:00.726201] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ecc00) on tqpair(0x188c760): expected_datao=0, payload_size=8192 00:31:32.104 [2024-10-08 20:59:00.726208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726226] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726234] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726246] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.104 [2024-10-08 20:59:00.726255] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.104 [2024-10-08 20:59:00.726262] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726268] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=512, cccid=4 00:31:32.104 [2024-10-08 20:59:00.726275] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18eca80) on tqpair(0x188c760): expected_datao=0, payload_size=512 00:31:32.104 [2024-10-08 20:59:00.726281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726290] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726296] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.104 [2024-10-08 20:59:00.726312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.104 [2024-10-08 20:59:00.726318] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=512, cccid=6 00:31:32.104 [2024-10-08 20:59:00.726331] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ecd80) on tqpair(0x188c760): expected_datao=0, payload_size=512 00:31:32.104 [2024-10-08 20:59:00.726338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726346] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726352] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:32.104 [2024-10-08 20:59:00.726368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:32.104 [2024-10-08 20:59:00.726375] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726380] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x188c760): datao=0, datal=4096, cccid=7 00:31:32.104 [2024-10-08 20:59:00.726387] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ecf00) on tqpair(0x188c760): expected_datao=0, payload_size=4096 00:31:32.104 [2024-10-08 20:59:00.726394] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726403] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726409] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.104 [2024-10-08 20:59:00.726429] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.104 [2024-10-08 20:59:00.726435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecc00) on tqpair=0x188c760 00:31:32.104 [2024-10-08 20:59:00.726459] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.104 [2024-10-08 20:59:00.726470] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.104 [2024-10-08 20:59:00.726477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18eca80) on tqpair=0x188c760 00:31:32.104 [2024-10-08 20:59:00.726500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.104 [2024-10-08 20:59:00.726511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.104 [2024-10-08 20:59:00.726518] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726524] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecd80) on tqpair=0x188c760 00:31:32.104 [2024-10-08 20:59:00.726534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.104 [2024-10-08 20:59:00.726543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.104 [2024-10-08 20:59:00.726549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.104 [2024-10-08 20:59:00.726555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecf00) on tqpair=0x188c760 00:31:32.104 ===================================================== 00:31:32.104 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.104 ===================================================== 00:31:32.104 Controller Capabilities/Features 00:31:32.104 ================================ 00:31:32.104 Vendor ID: 8086 00:31:32.104 Subsystem Vendor ID: 8086 00:31:32.104 Serial Number: SPDK00000000000001 00:31:32.104 Model Number: SPDK bdev Controller 00:31:32.104 Firmware Version: 25.01 00:31:32.104 Recommended Arb Burst: 6 00:31:32.104 IEEE OUI Identifier: e4 d2 5c 00:31:32.104 Multi-path I/O 00:31:32.104 May have multiple subsystem ports: Yes 
00:31:32.104 May have multiple controllers: Yes 00:31:32.104 Associated with SR-IOV VF: No 00:31:32.104 Max Data Transfer Size: 131072 00:31:32.104 Max Number of Namespaces: 32 00:31:32.104 Max Number of I/O Queues: 127 00:31:32.104 NVMe Specification Version (VS): 1.3 00:31:32.104 NVMe Specification Version (Identify): 1.3 00:31:32.104 Maximum Queue Entries: 128 00:31:32.104 Contiguous Queues Required: Yes 00:31:32.104 Arbitration Mechanisms Supported 00:31:32.104 Weighted Round Robin: Not Supported 00:31:32.104 Vendor Specific: Not Supported 00:31:32.104 Reset Timeout: 15000 ms 00:31:32.104 Doorbell Stride: 4 bytes 00:31:32.104 NVM Subsystem Reset: Not Supported 00:31:32.104 Command Sets Supported 00:31:32.104 NVM Command Set: Supported 00:31:32.104 Boot Partition: Not Supported 00:31:32.104 Memory Page Size Minimum: 4096 bytes 00:31:32.104 Memory Page Size Maximum: 4096 bytes 00:31:32.104 Persistent Memory Region: Not Supported 00:31:32.104 Optional Asynchronous Events Supported 00:31:32.104 Namespace Attribute Notices: Supported 00:31:32.104 Firmware Activation Notices: Not Supported 00:31:32.104 ANA Change Notices: Not Supported 00:31:32.104 PLE Aggregate Log Change Notices: Not Supported 00:31:32.104 LBA Status Info Alert Notices: Not Supported 00:31:32.104 EGE Aggregate Log Change Notices: Not Supported 00:31:32.104 Normal NVM Subsystem Shutdown event: Not Supported 00:31:32.104 Zone Descriptor Change Notices: Not Supported 00:31:32.104 Discovery Log Change Notices: Not Supported 00:31:32.104 Controller Attributes 00:31:32.104 128-bit Host Identifier: Supported 00:31:32.104 Non-Operational Permissive Mode: Not Supported 00:31:32.104 NVM Sets: Not Supported 00:31:32.104 Read Recovery Levels: Not Supported 00:31:32.104 Endurance Groups: Not Supported 00:31:32.104 Predictable Latency Mode: Not Supported 00:31:32.104 Traffic Based Keep ALive: Not Supported 00:31:32.104 Namespace Granularity: Not Supported 00:31:32.104 SQ Associations: Not Supported 00:31:32.104 UUID List: Not Supported 00:31:32.104 Multi-Domain Subsystem: Not Supported 00:31:32.104 Fixed Capacity Management: Not Supported 00:31:32.104 Variable Capacity Management: Not Supported 00:31:32.104 Delete Endurance Group: Not Supported 00:31:32.104 Delete NVM Set: Not Supported 00:31:32.104 Extended LBA Formats Supported: Not Supported 00:31:32.104 Flexible Data Placement Supported: Not Supported 00:31:32.104 00:31:32.104 Controller Memory Buffer Support 00:31:32.104 ================================ 00:31:32.104 Supported: No 00:31:32.104 00:31:32.104 Persistent Memory Region Support 00:31:32.104 ================================ 00:31:32.104 Supported: No 00:31:32.104 00:31:32.104 Admin Command Set Attributes 00:31:32.104 ============================ 00:31:32.104 Security Send/Receive: Not Supported 00:31:32.104 Format NVM: Not Supported 00:31:32.104 Firmware Activate/Download: Not Supported 00:31:32.104 Namespace Management: Not Supported 00:31:32.104 Device Self-Test: Not Supported 00:31:32.104 Directives: Not Supported 00:31:32.104 NVMe-MI: Not Supported 00:31:32.104 Virtualization Management: Not Supported 00:31:32.104 Doorbell Buffer Config: Not Supported 00:31:32.104 Get LBA Status Capability: Not Supported 00:31:32.104 Command & Feature Lockdown Capability: Not Supported 00:31:32.104 Abort Command Limit: 4 00:31:32.104 Async Event Request Limit: 4 00:31:32.104 Number of Firmware Slots: N/A 00:31:32.105 Firmware Slot 1 Read-Only: N/A 00:31:32.105 Firmware Activation Without Reset: N/A 00:31:32.105 Multiple Update 
Detection Support: N/A 00:31:32.105 Firmware Update Granularity: No Information Provided 00:31:32.105 Per-Namespace SMART Log: No 00:31:32.105 Asymmetric Namespace Access Log Page: Not Supported 00:31:32.105 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:32.105 Command Effects Log Page: Supported 00:31:32.105 Get Log Page Extended Data: Supported 00:31:32.105 Telemetry Log Pages: Not Supported 00:31:32.105 Persistent Event Log Pages: Not Supported 00:31:32.105 Supported Log Pages Log Page: May Support 00:31:32.105 Commands Supported & Effects Log Page: Not Supported 00:31:32.105 Feature Identifiers & Effects Log Page:May Support 00:31:32.105 NVMe-MI Commands & Effects Log Page: May Support 00:31:32.105 Data Area 4 for Telemetry Log: Not Supported 00:31:32.105 Error Log Page Entries Supported: 128 00:31:32.105 Keep Alive: Supported 00:31:32.105 Keep Alive Granularity: 10000 ms 00:31:32.105 00:31:32.105 NVM Command Set Attributes 00:31:32.105 ========================== 00:31:32.105 Submission Queue Entry Size 00:31:32.105 Max: 64 00:31:32.105 Min: 64 00:31:32.105 Completion Queue Entry Size 00:31:32.105 Max: 16 00:31:32.105 Min: 16 00:31:32.105 Number of Namespaces: 32 00:31:32.105 Compare Command: Supported 00:31:32.105 Write Uncorrectable Command: Not Supported 00:31:32.105 Dataset Management Command: Supported 00:31:32.105 Write Zeroes Command: Supported 00:31:32.105 Set Features Save Field: Not Supported 00:31:32.105 Reservations: Supported 00:31:32.105 Timestamp: Not Supported 00:31:32.105 Copy: Supported 00:31:32.105 Volatile Write Cache: Present 00:31:32.105 Atomic Write Unit (Normal): 1 00:31:32.105 Atomic Write Unit (PFail): 1 00:31:32.105 Atomic Compare & Write Unit: 1 00:31:32.105 Fused Compare & Write: Supported 00:31:32.105 Scatter-Gather List 00:31:32.105 SGL Command Set: Supported 00:31:32.105 SGL Keyed: Supported 00:31:32.105 SGL Bit Bucket Descriptor: Not Supported 00:31:32.105 SGL Metadata Pointer: Not Supported 00:31:32.105 Oversized SGL: Not Supported 00:31:32.105 SGL Metadata Address: Not Supported 00:31:32.105 SGL Offset: Supported 00:31:32.105 Transport SGL Data Block: Not Supported 00:31:32.105 Replay Protected Memory Block: Not Supported 00:31:32.105 00:31:32.105 Firmware Slot Information 00:31:32.105 ========================= 00:31:32.105 Active slot: 1 00:31:32.105 Slot 1 Firmware Revision: 25.01 00:31:32.105 00:31:32.105 00:31:32.105 Commands Supported and Effects 00:31:32.105 ============================== 00:31:32.105 Admin Commands 00:31:32.105 -------------- 00:31:32.105 Get Log Page (02h): Supported 00:31:32.105 Identify (06h): Supported 00:31:32.105 Abort (08h): Supported 00:31:32.105 Set Features (09h): Supported 00:31:32.105 Get Features (0Ah): Supported 00:31:32.105 Asynchronous Event Request (0Ch): Supported 00:31:32.105 Keep Alive (18h): Supported 00:31:32.105 I/O Commands 00:31:32.105 ------------ 00:31:32.105 Flush (00h): Supported LBA-Change 00:31:32.105 Write (01h): Supported LBA-Change 00:31:32.105 Read (02h): Supported 00:31:32.105 Compare (05h): Supported 00:31:32.105 Write Zeroes (08h): Supported LBA-Change 00:31:32.105 Dataset Management (09h): Supported LBA-Change 00:31:32.105 Copy (19h): Supported LBA-Change 00:31:32.105 00:31:32.105 Error Log 00:31:32.105 ========= 00:31:32.105 00:31:32.105 Arbitration 00:31:32.105 =========== 00:31:32.105 Arbitration Burst: 1 00:31:32.105 00:31:32.105 Power Management 00:31:32.105 ================ 00:31:32.105 Number of Power States: 1 00:31:32.105 Current Power State: Power State #0 00:31:32.105 Power 
State #0: 00:31:32.105 Max Power: 0.00 W 00:31:32.105 Non-Operational State: Operational 00:31:32.105 Entry Latency: Not Reported 00:31:32.105 Exit Latency: Not Reported 00:31:32.105 Relative Read Throughput: 0 00:31:32.105 Relative Read Latency: 0 00:31:32.105 Relative Write Throughput: 0 00:31:32.105 Relative Write Latency: 0 00:31:32.105 Idle Power: Not Reported 00:31:32.105 Active Power: Not Reported 00:31:32.105 Non-Operational Permissive Mode: Not Supported 00:31:32.105 00:31:32.105 Health Information 00:31:32.105 ================== 00:31:32.105 Critical Warnings: 00:31:32.105 Available Spare Space: OK 00:31:32.105 Temperature: OK 00:31:32.105 Device Reliability: OK 00:31:32.105 Read Only: No 00:31:32.105 Volatile Memory Backup: OK 00:31:32.105 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:32.105 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:32.105 Available Spare: 0% 00:31:32.105 Available Spare Threshold: 0% 00:31:32.105 Life Percentage Used:[2024-10-08 20:59:00.726690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.726702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x188c760) 00:31:32.105 [2024-10-08 20:59:00.726713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.105 [2024-10-08 20:59:00.726735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ecf00, cid 7, qid 0 00:31:32.105 [2024-10-08 20:59:00.726858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.105 [2024-10-08 20:59:00.726872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.105 [2024-10-08 20:59:00.726879] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.726886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ecf00) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.726926] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:32.105 [2024-10-08 20:59:00.726945] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec480) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.726956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.105 [2024-10-08 20:59:00.726980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec600) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.726988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.105 [2024-10-08 20:59:00.726996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec780) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.105 [2024-10-08 20:59:00.727011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.727018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.105 [2024-10-08 20:59:00.727030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.727038] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.727044] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.105 [2024-10-08 20:59:00.727054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.105 [2024-10-08 20:59:00.727075] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.105 [2024-10-08 20:59:00.727188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.105 [2024-10-08 20:59:00.727201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.105 [2024-10-08 20:59:00.727208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.727214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.105 [2024-10-08 20:59:00.727228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.727237] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.105 [2024-10-08 20:59:00.727243] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.105 [2024-10-08 20:59:00.727253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.105 [2024-10-08 20:59:00.727278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.105 [2024-10-08 20:59:00.727374] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.105 [2024-10-08 20:59:00.727387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.106 [2024-10-08 20:59:00.727393] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.106 [2024-10-08 20:59:00.727407] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:32.106 [2024-10-08 20:59:00.727414] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:32.106 [2024-10-08 20:59:00.727430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.106 [2024-10-08 20:59:00.727454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.106 [2024-10-08 20:59:00.727473] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.106 [2024-10-08 20:59:00.727557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.106 [2024-10-08 20:59:00.727569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.106 [2024-10-08 20:59:00.727575] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727582] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.106 [2024-10-08 20:59:00.727598] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727606] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.727613] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.106 [2024-10-08 20:59:00.727622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.106 [2024-10-08 20:59:00.727641] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.106 [2024-10-08 20:59:00.731668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.106 [2024-10-08 20:59:00.731683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.106 [2024-10-08 20:59:00.731690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.731697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.106 [2024-10-08 20:59:00.731715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.731724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.731730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x188c760) 00:31:32.106 [2024-10-08 20:59:00.731741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.106 [2024-10-08 20:59:00.731763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec900, cid 3, qid 0 00:31:32.106 [2024-10-08 20:59:00.731879] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:32.106 [2024-10-08 20:59:00.731892] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:32.106 [2024-10-08 20:59:00.731899] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:32.106 [2024-10-08 20:59:00.731910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec900) on tqpair=0x188c760 00:31:32.106 [2024-10-08 20:59:00.731923] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:31:32.106 0% 00:31:32.106 Data Units Read: 0 00:31:32.106 Data Units Written: 0 00:31:32.106 Host Read Commands: 0 00:31:32.106 Host Write Commands: 0 00:31:32.106 Controller Busy Time: 0 minutes 00:31:32.106 Power Cycles: 0 00:31:32.106 Power On Hours: 0 hours 00:31:32.106 Unsafe Shutdowns: 0 00:31:32.106 Unrecoverable Media Errors: 0 00:31:32.106 Lifetime Error Log Entries: 0 00:31:32.106 Warning Temperature Time: 0 minutes 00:31:32.106 Critical Temperature Time: 0 minutes 00:31:32.106 00:31:32.106 Number of Queues 00:31:32.106 ================ 00:31:32.106 Number of I/O Submission Queues: 127 00:31:32.106 Number of I/O Completion Queues: 127 00:31:32.106 00:31:32.106 Active Namespaces 00:31:32.106 ================= 00:31:32.106 Namespace ID:1 00:31:32.106 Error Recovery Timeout: Unlimited 00:31:32.106 Command Set Identifier: NVM (00h) 00:31:32.106 Deallocate: Supported 00:31:32.106 Deallocated/Unwritten Error: Not Supported 00:31:32.106 Deallocated Read Value: Unknown 00:31:32.106 Deallocate in Write Zeroes: Not Supported 00:31:32.106 Deallocated Guard Field: 0xFFFF 00:31:32.106 Flush: Supported 00:31:32.106 Reservation: Supported 00:31:32.106 Namespace Sharing Capabilities: Multiple 
Controllers 00:31:32.106 Size (in LBAs): 131072 (0GiB) 00:31:32.106 Capacity (in LBAs): 131072 (0GiB) 00:31:32.106 Utilization (in LBAs): 131072 (0GiB) 00:31:32.106 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:32.106 EUI64: ABCDEF0123456789 00:31:32.106 UUID: 69168a31-51c1-4d1a-943c-5411687ae343 00:31:32.106 Thin Provisioning: Not Supported 00:31:32.106 Per-NS Atomic Units: Yes 00:31:32.106 Atomic Boundary Size (Normal): 0 00:31:32.106 Atomic Boundary Size (PFail): 0 00:31:32.106 Atomic Boundary Offset: 0 00:31:32.106 Maximum Single Source Range Length: 65535 00:31:32.106 Maximum Copy Length: 65535 00:31:32.106 Maximum Source Range Count: 1 00:31:32.106 NGUID/EUI64 Never Reused: No 00:31:32.106 Namespace Write Protected: No 00:31:32.106 Number of LBA Formats: 1 00:31:32.106 Current LBA Format: LBA Format #00 00:31:32.106 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:32.106 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.106 rmmod nvme_tcp 00:31:32.106 rmmod nvme_fabrics 00:31:32.106 rmmod nvme_keyring 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1804958 ']' 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1804958 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1804958 ']' 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1804958 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:32.106 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1804958 00:31:32.367 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:32.367 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:32.367 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1804958' 00:31:32.367 killing process with pid 1804958 00:31:32.367 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1804958 00:31:32.367 20:59:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1804958 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.628 20:59:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.583 20:59:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:34.583 00:31:34.583 real 0m6.986s 00:31:34.583 user 0m5.982s 00:31:34.583 sys 0m2.854s 00:31:34.583 20:59:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:34.583 20:59:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:34.583 ************************************ 00:31:34.583 END TEST nvmf_identify 00:31:34.583 ************************************ 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.855 ************************************ 00:31:34.855 START TEST nvmf_perf 00:31:34.855 ************************************ 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:34.855 * Looking for test storage... 
00:31:34.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.855 --rc genhtml_branch_coverage=1 00:31:34.855 --rc genhtml_function_coverage=1 00:31:34.855 --rc genhtml_legend=1 00:31:34.855 --rc geninfo_all_blocks=1 00:31:34.855 --rc geninfo_unexecuted_blocks=1 00:31:34.855 00:31:34.855 ' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.855 --rc genhtml_branch_coverage=1 00:31:34.855 --rc genhtml_function_coverage=1 00:31:34.855 --rc genhtml_legend=1 00:31:34.855 --rc geninfo_all_blocks=1 00:31:34.855 --rc geninfo_unexecuted_blocks=1 00:31:34.855 00:31:34.855 ' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.855 --rc genhtml_branch_coverage=1 00:31:34.855 --rc genhtml_function_coverage=1 00:31:34.855 --rc genhtml_legend=1 00:31:34.855 --rc geninfo_all_blocks=1 00:31:34.855 --rc geninfo_unexecuted_blocks=1 00:31:34.855 00:31:34.855 ' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.855 --rc genhtml_branch_coverage=1 00:31:34.855 --rc genhtml_function_coverage=1 00:31:34.855 --rc genhtml_legend=1 00:31:34.855 --rc geninfo_all_blocks=1 00:31:34.855 --rc geninfo_unexecuted_blocks=1 00:31:34.855 00:31:34.855 ' 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:34.855 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.856 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:35.114 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.115 20:59:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.115 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.115 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:35.115 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:35.115 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.115 20:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.407 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:38.408 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:38.408 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:38.408 Found net devices under 0000:84:00.0: cvl_0_0 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:38.408 20:59:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:38.408 Found net devices under 0000:84:00.1: cvl_0_1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.408 20:59:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:31:38.408 00:31:38.408 --- 10.0.0.2 ping statistics --- 00:31:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.408 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:38.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:31:38.408 00:31:38.408 --- 10.0.0.1 ping statistics --- 00:31:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.408 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1807195 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1807195 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1807195 ']' 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:38.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:38.408 20:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:38.408 [2024-10-08 20:59:06.753241] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:31:38.408 [2024-10-08 20:59:06.753405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.408 [2024-10-08 20:59:06.878010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.408 [2024-10-08 20:59:07.019707] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.408 [2024-10-08 20:59:07.019781] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.408 [2024-10-08 20:59:07.019801] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.409 [2024-10-08 20:59:07.019822] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.409 [2024-10-08 20:59:07.019840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.409 [2024-10-08 20:59:07.022093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.409 [2024-10-08 20:59:07.022157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.409 [2024-10-08 20:59:07.022230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:38.409 [2024-10-08 20:59:07.022234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.409 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:38.409 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:38.409 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:38.409 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:38.409 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:38.667 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.667 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:38.667 20:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:41.959 20:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:41.959 20:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:42.218 20:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:31:42.218 20:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:42.477 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:31:42.477 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:31:42.477 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:42.477 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:42.477 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:42.736 [2024-10-08 20:59:11.417768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.736 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:43.304 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:43.304 20:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:43.873 20:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:43.873 20:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:44.133 20:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.703 [2024-10-08 20:59:13.291713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.703 20:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:45.274 20:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:31:45.274 20:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:31:45.274 20:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:45.274 20:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:31:46.656 Initializing NVMe Controllers 00:31:46.656 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:31:46.656 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:31:46.656 Initialization complete. Launching workers. 
00:31:46.656 ======================================================== 00:31:46.656 Latency(us) 00:31:46.656 Device Information : IOPS MiB/s Average min max 00:31:46.656 PCIE (0000:82:00.0) NSID 1 from core 0: 84678.73 330.78 377.31 44.25 5100.68 00:31:46.656 ======================================================== 00:31:46.656 Total : 84678.73 330.78 377.31 44.25 5100.68 00:31:46.656 00:31:46.656 20:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.043 Initializing NVMe Controllers 00:31:48.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:48.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:48.043 Initialization complete. Launching workers. 00:31:48.043 ======================================================== 00:31:48.043 Latency(us) 00:31:48.043 Device Information : IOPS MiB/s Average min max 00:31:48.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.00 0.32 12434.44 136.53 44746.70 00:31:48.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19176.92 5204.47 55851.48 00:31:48.043 ======================================================== 00:31:48.043 Total : 136.00 0.53 15111.60 136.53 55851.48 00:31:48.043 00:31:48.043 20:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.983 Initializing NVMe Controllers 00:31:48.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:48.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:48.983 Initialization complete. Launching workers. 00:31:48.983 ======================================================== 00:31:48.983 Latency(us) 00:31:48.983 Device Information : IOPS MiB/s Average min max 00:31:48.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8429.71 32.93 3795.76 544.83 7638.38 00:31:48.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3855.23 15.06 8327.07 6014.46 16116.61 00:31:48.983 ======================================================== 00:31:48.983 Total : 12284.95 47.99 5217.76 544.83 16116.61 00:31:48.983 00:31:48.983 20:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:48.983 20:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:48.983 20:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.523 Initializing NVMe Controllers 00:31:51.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.523 Controller IO queue size 128, less than required. 00:31:51.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:51.523 Controller IO queue size 128, less than required. 00:31:51.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:51.523 Initialization complete. Launching workers. 00:31:51.523 ======================================================== 00:31:51.523 Latency(us) 00:31:51.523 Device Information : IOPS MiB/s Average min max 00:31:51.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1307.08 326.77 101181.94 64352.15 177507.04 00:31:51.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.81 146.70 227515.77 111393.39 350358.28 00:31:51.523 ======================================================== 00:31:51.523 Total : 1893.89 473.47 140325.75 64352.15 350358.28 00:31:51.523 00:31:51.523 20:59:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:51.782 No valid NVMe controllers or AIO or URING devices found 00:31:51.782 Initializing NVMe Controllers 00:31:51.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.782 Controller IO queue size 128, less than required. 00:31:51.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.782 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:51.782 Controller IO queue size 128, less than required. 00:31:51.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.782 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:51.782 WARNING: Some requested NVMe devices were skipped 00:31:51.782 20:59:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:54.319 Initializing NVMe Controllers 00:31:54.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.319 Controller IO queue size 128, less than required. 00:31:54.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:54.319 Controller IO queue size 128, less than required. 00:31:54.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:54.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:54.319 Initialization complete. Launching workers. 
00:31:54.319 00:31:54.319 ==================== 00:31:54.319 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:54.319 TCP transport: 00:31:54.319 polls: 7900 00:31:54.319 idle_polls: 5450 00:31:54.319 sock_completions: 2450 00:31:54.319 nvme_completions: 4829 00:31:54.319 submitted_requests: 7230 00:31:54.319 queued_requests: 1 00:31:54.319 00:31:54.319 ==================== 00:31:54.319 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:54.319 TCP transport: 00:31:54.319 polls: 10984 00:31:54.319 idle_polls: 8490 00:31:54.319 sock_completions: 2494 00:31:54.319 nvme_completions: 4753 00:31:54.319 submitted_requests: 7044 00:31:54.319 queued_requests: 1 00:31:54.319 ======================================================== 00:31:54.319 Latency(us) 00:31:54.319 Device Information : IOPS MiB/s Average min max 00:31:54.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.09 301.52 109180.37 63517.33 155602.46 00:31:54.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1187.10 296.78 109852.85 55876.41 193287.60 00:31:54.319 ======================================================== 00:31:54.319 Total : 2393.19 598.30 109513.94 55876.41 193287.60 00:31:54.319 00:31:54.319 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:54.319 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.887 rmmod nvme_tcp 00:31:54.887 rmmod nvme_fabrics 00:31:54.887 rmmod nvme_keyring 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1807195 ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1807195 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1807195 ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1807195 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1807195 00:31:54.887 20:59:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1807195' 00:31:54.887 killing process with pid 1807195 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1807195 00:31:54.887 20:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1807195 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.792 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.793 20:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.705 00:31:58.705 real 0m23.936s 00:31:58.705 user 1m12.970s 00:31:58.705 sys 0m6.947s 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:58.705 ************************************ 00:31:58.705 END TEST nvmf_perf 00:31:58.705 ************************************ 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.705 ************************************ 00:31:58.705 START TEST nvmf_fio_host 00:31:58.705 ************************************ 00:31:58.705 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:58.966 * Looking for test storage... 
00:31:58.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:58.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.966 --rc genhtml_branch_coverage=1 00:31:58.966 --rc genhtml_function_coverage=1 00:31:58.966 --rc genhtml_legend=1 00:31:58.966 --rc geninfo_all_blocks=1 00:31:58.966 --rc geninfo_unexecuted_blocks=1 00:31:58.966 00:31:58.966 ' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:58.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.966 --rc genhtml_branch_coverage=1 00:31:58.966 --rc genhtml_function_coverage=1 00:31:58.966 --rc genhtml_legend=1 00:31:58.966 --rc geninfo_all_blocks=1 00:31:58.966 --rc geninfo_unexecuted_blocks=1 00:31:58.966 00:31:58.966 ' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:58.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.966 --rc genhtml_branch_coverage=1 00:31:58.966 --rc genhtml_function_coverage=1 00:31:58.966 --rc genhtml_legend=1 00:31:58.966 --rc geninfo_all_blocks=1 00:31:58.966 --rc geninfo_unexecuted_blocks=1 00:31:58.966 00:31:58.966 ' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:58.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.966 --rc genhtml_branch_coverage=1 00:31:58.966 --rc genhtml_function_coverage=1 00:31:58.966 --rc genhtml_legend=1 00:31:58.966 --rc geninfo_all_blocks=1 00:31:58.966 --rc geninfo_unexecuted_blocks=1 00:31:58.966 00:31:58.966 ' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.966 20:59:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.966 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:58.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.967 
20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.967 20:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:02.257 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:02.257 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:02.257 Found net devices under 0000:84:00.0: cvl_0_0 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:02.257 Found net devices under 0000:84:00.1: cvl_0_1 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:02.257 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:02.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:32:02.258 00:32:02.258 --- 10.0.0.2 ping statistics --- 00:32:02.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.258 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:02.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:32:02.258 00:32:02.258 --- 10.0.0.1 ping statistics --- 00:32:02.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.258 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1811416 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1811416 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1811416 ']' 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.258 20:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.258 [2024-10-08 20:59:30.831961] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:32:02.258 [2024-10-08 20:59:30.832128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.258 [2024-10-08 20:59:30.987018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:02.518 [2024-10-08 20:59:31.180629] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.518 [2024-10-08 20:59:31.180746] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.518 [2024-10-08 20:59:31.180767] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.518 [2024-10-08 20:59:31.180784] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.518 [2024-10-08 20:59:31.180799] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.518 [2024-10-08 20:59:31.184144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.518 [2024-10-08 20:59:31.184250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:02.518 [2024-10-08 20:59:31.184353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:02.518 [2024-10-08 20:59:31.184356] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.455 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.455 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:32:03.455 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:03.714 [2024-10-08 20:59:32.473228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.974 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:03.974 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:03.974 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.974 20:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:04.543 Malloc1 00:32:04.543 20:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:04.803 20:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:05.371 20:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.631 [2024-10-08 20:59:34.198978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.631 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:06.212 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:06.213 20:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.470 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:06.470 fio-3.35 00:32:06.470 Starting 1 thread 00:32:09.002 00:32:09.002 test: (groupid=0, jobs=1): 
err= 0: pid=1812054: Tue Oct 8 20:59:37 2024 00:32:09.002 read: IOPS=8978, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2006msec) 00:32:09.002 slat (usec): min=2, max=123, avg= 3.04, stdev= 1.31 00:32:09.002 clat (usec): min=2343, max=13806, avg=7769.13, stdev=620.54 00:32:09.002 lat (usec): min=2368, max=13809, avg=7772.17, stdev=620.45 00:32:09.002 clat percentiles (usec): 00:32:09.002 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:32:09.002 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:32:09.002 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:32:09.002 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11469], 99.95th=[12518], 00:32:09.002 | 99.99th=[13698] 00:32:09.002 bw ( KiB/s): min=35264, max=36456, per=99.94%, avg=35890.00, stdev=491.39, samples=4 00:32:09.002 iops : min= 8816, max= 9114, avg=8972.50, stdev=122.85, samples=4 00:32:09.002 write: IOPS=9000, BW=35.2MiB/s (36.9MB/s)(70.5MiB/2006msec); 0 zone resets 00:32:09.002 slat (nsec): min=2751, max=103113, avg=3184.92, stdev=872.39 00:32:09.002 clat (usec): min=1125, max=12883, avg=6447.36, stdev=538.18 00:32:09.002 lat (usec): min=1132, max=12886, avg=6450.55, stdev=538.13 00:32:09.002 clat percentiles (usec): 00:32:09.002 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:32:09.002 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:32:09.002 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:32:09.002 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10683], 99.95th=[11731], 00:32:09.002 | 99.99th=[12649] 00:32:09.002 bw ( KiB/s): min=35760, max=36224, per=99.96%, avg=35988.00, stdev=193.49, samples=4 00:32:09.002 iops : min= 8940, max= 9056, avg=8997.00, stdev=48.37, samples=4 00:32:09.002 lat (msec) : 2=0.03%, 4=0.13%, 10=99.71%, 20=0.13% 00:32:09.002 cpu : usr=69.58%, sys=29.08%, ctx=62, majf=0, minf=31 00:32:09.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:09.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.002 issued rwts: total=18010,18056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.002 00:32:09.002 Run status group 0 (all jobs): 00:32:09.002 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2006-2006msec 00:32:09.002 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.5MiB (74.0MB), run=2006-2006msec 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:09.002 20:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:09.261 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:09.261 fio-3.35 00:32:09.261 Starting 1 thread 00:32:11.791 00:32:11.791 test: (groupid=0, jobs=1): err= 0: pid=1812390: Tue Oct 8 20:59:40 2024 00:32:11.791 read: IOPS=6788, BW=106MiB/s (111MB/s)(213MiB/2007msec) 00:32:11.791 slat (usec): min=3, max=317, avg= 6.01, stdev= 4.57 00:32:11.791 clat (usec): min=2874, max=26592, avg=11580.55, stdev=3845.91 00:32:11.791 lat (usec): min=2881, max=26601, avg=11586.56, stdev=3846.58 00:32:11.791 clat percentiles (usec): 00:32:11.791 | 1.00th=[ 5342], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 8291], 00:32:11.791 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11600], 00:32:11.791 | 70.00th=[12780], 80.00th=[14484], 90.00th=[17433], 95.00th=[19006], 00:32:11.791 | 99.00th=[22414], 99.50th=[23200], 99.90th=[25297], 99.95th=[25822], 00:32:11.791 | 99.99th=[26608] 00:32:11.791 bw ( KiB/s): min=41088, max=63712, per=49.85%, avg=54152.00, stdev=11331.14, samples=4 00:32:11.791 iops : min= 2568, max= 3982, avg=3384.50, stdev=708.20, samples=4 00:32:11.791 write: IOPS=4212, BW=65.8MiB/s (69.0MB/s)(110MiB/1678msec); 0 zone resets 00:32:11.791 slat 
(usec): min=39, max=326, avg=49.70, stdev=13.91 00:32:11.791 clat (usec): min=2983, max=21442, avg=12698.95, stdev=2054.94 00:32:11.791 lat (usec): min=3031, max=21482, avg=12748.65, stdev=2055.43 00:32:11.791 clat percentiles (usec): 00:32:11.791 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207], 00:32:11.791 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:32:11.791 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15401], 95.00th=[16057], 00:32:11.791 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20317], 99.95th=[20841], 00:32:11.791 | 99.99th=[21365] 00:32:11.791 bw ( KiB/s): min=41568, max=66944, per=83.48%, avg=56272.00, stdev=12570.11, samples=4 00:32:11.791 iops : min= 2598, max= 4184, avg=3517.00, stdev=785.63, samples=4 00:32:11.791 lat (msec) : 4=0.16%, 10=27.90%, 20=69.56%, 50=2.38% 00:32:11.791 cpu : usr=82.56%, sys=15.89%, ctx=34, majf=0, minf=60 00:32:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:32:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.791 issued rwts: total=13625,7069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.791 00:32:11.791 Run status group 0 (all jobs): 00:32:11.791 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=213MiB (223MB), run=2007-2007msec 00:32:11.791 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=110MiB (116MB), run=1678-1678msec 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.791 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.791 rmmod nvme_tcp 00:32:11.791 rmmod nvme_fabrics 00:32:12.050 rmmod nvme_keyring 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1811416 ']' 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1811416 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1811416 ']' 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 1811416 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1811416 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1811416' 00:32:12.050 killing process with pid 1811416 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1811416 00:32:12.050 20:59:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1811416 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.309 20:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.925 00:32:14.925 real 0m15.702s 00:32:14.925 user 0m46.774s 00:32:14.925 sys 0m4.984s 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.925 ************************************ 00:32:14.925 END TEST nvmf_fio_host 00:32:14.925 ************************************ 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.925 ************************************ 00:32:14.925 START TEST nvmf_failover 00:32:14.925 ************************************ 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:14.925 * Looking for test storage... 00:32:14.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:14.925 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:14.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.926 --rc genhtml_branch_coverage=1 00:32:14.926 --rc genhtml_function_coverage=1 00:32:14.926 --rc genhtml_legend=1 00:32:14.926 --rc geninfo_all_blocks=1 00:32:14.926 --rc geninfo_unexecuted_blocks=1 00:32:14.926 00:32:14.926 ' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:14.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.926 --rc genhtml_branch_coverage=1 00:32:14.926 --rc genhtml_function_coverage=1 00:32:14.926 --rc genhtml_legend=1 00:32:14.926 --rc geninfo_all_blocks=1 00:32:14.926 --rc geninfo_unexecuted_blocks=1 00:32:14.926 00:32:14.926 ' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:14.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.926 --rc genhtml_branch_coverage=1 00:32:14.926 --rc genhtml_function_coverage=1 00:32:14.926 --rc genhtml_legend=1 00:32:14.926 --rc geninfo_all_blocks=1 00:32:14.926 --rc geninfo_unexecuted_blocks=1 00:32:14.926 00:32:14.926 ' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:14.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.926 --rc genhtml_branch_coverage=1 00:32:14.926 --rc genhtml_function_coverage=1 00:32:14.926 --rc genhtml_legend=1 00:32:14.926 --rc geninfo_all_blocks=1 00:32:14.926 --rc geninfo_unexecuted_blocks=1 00:32:14.926 00:32:14.926 ' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
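The nvmf_failover test drives the target entirely through scripts/rpc.py: as the trace further below shows, it creates a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and three listeners on 10.0.0.2 ports 4420/4421/4422 that it later removes and re-adds to force path failover. A condensed sketch of the equivalent standalone RPC sequence, assuming a target is already listening on the default /var/tmp/spdk.sock and abbreviating the workspace path to $SPDK:

    rpc=$SPDK/scripts/rpc.py
    # transport and backing bdev (sizes match MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem, namespace, and the three listeners the failover test flips between
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done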
00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:14.926 20:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:18.216 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:18.216 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:18.216 Found net devices under 0000:84:00.0: cvl_0_0 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:18.216 Found net devices under 0000:84:00.1: cvl_0_1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:32:18.216 00:32:18.216 --- 10.0.0.2 ping statistics --- 00:32:18.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.216 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:32:18.216 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:32:18.216 00:32:18.216 --- 10.0.0.1 ping statistics --- 00:32:18.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.217 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1814736 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1814736 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1814736 ']' 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:18.217 20:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.217 [2024-10-08 20:59:46.767176] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:32:18.217 [2024-10-08 20:59:46.767359] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.217 [2024-10-08 20:59:46.927217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:18.476 [2024-10-08 20:59:47.124588] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:18.476 [2024-10-08 20:59:47.124733] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.476 [2024-10-08 20:59:47.124752] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.476 [2024-10-08 20:59:47.124775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.476 [2024-10-08 20:59:47.124787] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.476 [2024-10-08 20:59:47.126274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.476 [2024-10-08 20:59:47.126335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.476 [2024-10-08 20:59:47.126339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.736 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:19.306 [2024-10-08 20:59:47.871173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.306 20:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:19.564 Malloc0 00:32:19.564 20:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:19.822 20:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.389 20:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.647 [2024-10-08 20:59:49.200026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.647 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:20.906 [2024-10-08 20:59:49.524943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:20.906 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:21.165 [2024-10-08 20:59:49.850034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1815151 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1815151 /var/tmp/bdevperf.sock 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1815151 ']' 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:21.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:21.165 20:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.733 20:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.733 20:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:21.733 20:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:22.300 NVMe0n1 00:32:22.558 20:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:23.125 00:32:23.125 20:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1815363 00:32:23.125 20:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:23.125 20:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:24.061 20:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.628 [2024-10-08 20:59:53.150500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.628 [2024-10-08 20:59:53.150657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.628 [2024-10-08 20:59:53.150677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.628 
[2024-10-08 20:59:53.150691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.628 [2024-10-08 20:59:53.150720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.629 [2024-10-08 20:59:53.150732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.629 [2024-10-08 20:59:53.150745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c3b90 is same with the state(6) to be set 00:32:24.629 20:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:27.917 20:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:27.917 00:32:27.917 20:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:28.484 20:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:31.778 20:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.778 [2024-10-08 21:00:00.286986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.778 21:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:32.715 21:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:32.974 [2024-10-08 21:00:01.633616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 
21:00:01.633870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.633992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 [2024-10-08 21:00:01.634129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1689a40 is same with the state(6) to be set 00:32:32.974 21:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1815363 00:32:38.249 { 00:32:38.249 "results": 
[ 00:32:38.249 { 00:32:38.249 "job": "NVMe0n1", 00:32:38.249 "core_mask": "0x1", 00:32:38.249 "workload": "verify", 00:32:38.249 "status": "finished", 00:32:38.249 "verify_range": { 00:32:38.249 "start": 0, 00:32:38.249 "length": 16384 00:32:38.249 }, 00:32:38.249 "queue_depth": 128, 00:32:38.249 "io_size": 4096, 00:32:38.249 "runtime": 15.004189, 00:32:38.249 "iops": 8495.960694709991, 00:32:38.249 "mibps": 33.1873464637109, 00:32:38.249 "io_failed": 15941, 00:32:38.249 "io_timeout": 0, 00:32:38.249 "avg_latency_us": 13365.543328023736, 00:32:38.249 "min_latency_us": 521.8607407407408, 00:32:38.249 "max_latency_us": 21165.70074074074 00:32:38.249 } 00:32:38.249 ], 00:32:38.249 "core_count": 1 00:32:38.249 } 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1815151 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1815151 ']' 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1815151 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1815151 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1815151' 00:32:38.249 killing process with pid 1815151 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1815151 00:32:38.249 21:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1815151 00:32:38.517 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:38.517 [2024-10-08 20:59:49.920847] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:32:38.517 [2024-10-08 20:59:49.920949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815151 ] 00:32:38.517 [2024-10-08 20:59:49.986293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.517 [2024-10-08 20:59:50.102617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.517 Running I/O for 15 seconds... 
00:32:38.517 8653.00 IOPS, 33.80 MiB/s [2024-10-08T19:00:07.280Z] [2024-10-08 20:59:53.152336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.517 [2024-10-08 20:59:53.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:38.517 [2024-10-08 20:59:53.152743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.152976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.152990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.517 [2024-10-08 20:59:53.153430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.517 [2024-10-08 20:59:53.153443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.153860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 
20:59:53.153951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.153980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.153995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.154123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.518 [2024-10-08 20:59:53.154152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.518 [2024-10-08 20:59:53.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.518 [2024-10-08 20:59:53.154693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.154980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.154995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90792 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.519 [2024-10-08 20:59:53.155901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.519 [2024-10-08 20:59:53.155914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.155929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.520 [2024-10-08 20:59:53.155942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.155957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.520 [2024-10-08 20:59:53.155970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.155988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.520 [2024-10-08 20:59:53.156002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 
[2024-10-08 20:59:53.156032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:53.156062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:53.156091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:53.156124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.520 [2024-10-08 20:59:53.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89960 len:8 PRP1 0x0 PRP2 0x0 00:32:38.520 [2024-10-08 20:59:53.156187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.520 [2024-10-08 20:59:53.156217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.520 [2024-10-08 20:59:53.156228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89968 len:8 PRP1 0x0 PRP2 0x0 00:32:38.520 [2024-10-08 20:59:53.156241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.520 [2024-10-08 20:59:53.156267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.520 [2024-10-08 20:59:53.156279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89976 len:8 PRP1 0x0 PRP2 0x0 00:32:38.520 [2024-10-08 20:59:53.156293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156356] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x222b4f0 was disconnected and freed. reset controller. 
00:32:38.520 [2024-10-08 20:59:53.156376] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:38.520 [2024-10-08 20:59:53.156412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:53.156431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:53.156460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:53.156490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:53.156516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:53.156529] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.520 [2024-10-08 20:59:53.159777] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.520 [2024-10-08 20:59:53.159816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2208cc0 (9): Bad file descriptor 00:32:38.520 [2024-10-08 20:59:53.316143] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:38.520 8077.00 IOPS, 31.55 MiB/s [2024-10-08T19:00:07.283Z] 8311.33 IOPS, 32.47 MiB/s [2024-10-08T19:00:07.283Z] 8448.25 IOPS, 33.00 MiB/s [2024-10-08T19:00:07.283Z] 8505.00 IOPS, 33.22 MiB/s [2024-10-08T19:00:07.283Z] [2024-10-08 20:59:56.963858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:56.963926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.963976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:56.963992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:56.964021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.520 [2024-10-08 20:59:56.964048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2208cc0 is same with the state(6) to be set 00:32:38.520 [2024-10-08 20:59:56.964827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.964855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.964896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.964927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.964957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.964972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.964986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 
20:59:56.965001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.520 [2024-10-08 20:59:56.965373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.520 [2024-10-08 20:59:56.965387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4080 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.965975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.965989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 
20:59:56.966218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.521 [2024-10-08 20:59:56.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.521 [2024-10-08 20:59:56.966449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.522 [2024-10-08 20:59:56.966477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.966975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.966989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
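The "(00/08)" printed with every aborted completion above is the NVMe Status Code Type / Status Code pair: SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion", i.e. these READ/WRITE commands were still queued on the I/O submission queue when it was torn down for the controller reset. As a side illustration (not part of the test output), a minimal decoding sketch under that reading; the helper name and table below are illustrative only, not SPDK API, and only the codes seen in this log are filled in:

    # Illustrative only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
    # prints, for the values observed in this log.
    GENERIC_STATUS = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",   # Command Aborted due to SQ Deletion
    }

    def decode_status(sct: int, sc: int) -> str:
        """Return a readable name for a Status Code Type / Status Code pair."""
        if sct == 0x0:  # Generic Command Status
            return GENERIC_STATUS.get(sc, "GENERIC STATUS 0x%02x" % sc)
        return "SCT 0x%x / SC 0x%02x" % (sct, sc)

    print(decode_status(0x00, 0x08))  # "ABORTED - SQ DELETION", matching the lines above
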
00:32:38.522 [2024-10-08 20:59:56.967131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.522 [2024-10-08 20:59:56.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.522 [2024-10-08 20:59:56.967673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.522 [2024-10-08 20:59:56.967702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.522 [2024-10-08 20:59:56.967717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:47 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.967978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.967992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 
20:59:56.968308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.523 [2024-10-08 20:59:56.968593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.523 [2024-10-08 20:59:56.968662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.523 [2024-10-08 20:59:56.968676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:32:38.523 [2024-10-08 20:59:56.968689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 20:59:56.968753] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2234810 was disconnected and freed. reset controller. 00:32:38.523 [2024-10-08 20:59:56.968772] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:38.523 [2024-10-08 20:59:56.968786] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.523 [2024-10-08 20:59:56.972049] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.523 [2024-10-08 20:59:56.972088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2208cc0 (9): Bad file descriptor 00:32:38.523 [2024-10-08 20:59:57.090148] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:38.523 8367.67 IOPS, 32.69 MiB/s [2024-10-08T19:00:07.286Z] 8426.43 IOPS, 32.92 MiB/s [2024-10-08T19:00:07.286Z] 8472.88 IOPS, 33.10 MiB/s [2024-10-08T19:00:07.286Z] 8470.22 IOPS, 33.09 MiB/s [2024-10-08T19:00:07.286Z] [2024-10-08 21:00:01.636111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 21:00:01.636200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 21:00:01.636232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 21:00:01.636261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 21:00:01.636297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 
[2024-10-08 21:00:01.636326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.523 [2024-10-08 21:00:01.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.523 [2024-10-08 21:00:01.636366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.636982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.636995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95488 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 [2024-10-08 21:00:01.637492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.524 
[2024-10-08 21:00:01.637520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.524 [2024-10-08 21:00:01.637549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.524 [2024-10-08 21:00:01.637578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.524 [2024-10-08 21:00:01.637606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.524 [2024-10-08 21:00:01.637621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.525 [2024-10-08 21:00:01.637635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.637971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.637986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.638000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.638028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.638057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.638086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.525 [2024-10-08 21:00:01.638115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.525 [2024-10-08 21:00:01.638263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.525 [2024-10-08 21:00:01.638300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.525 [2024-10-08 21:00:01.638327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.525 [2024-10-08 21:00:01.638354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2208cc0 is same with the state(6) to be set 00:32:38.525 [2024-10-08 21:00:01.638526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638669] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:32:38.525 [2024-10-08 21:00:01.638934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.525 [2024-10-08 21:00:01.638947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.525 [2024-10-08 21:00:01.638958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.525 [2024-10-08 21:00:01.638969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.638982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.638995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 
0x0 00:32:38.526 [2024-10-08 21:00:01.639278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95088 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95104 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95120 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95128 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95136 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95152 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95160 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.639955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.639966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.639979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.639992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.640003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.640015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.640027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.640054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.640066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.640078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.640091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.640104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.640115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.640127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.640140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.526 [2024-10-08 21:00:01.640154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.640165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.526 [2024-10-08 21:00:01.640177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:32:38.526 [2024-10-08 21:00:01.640190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:38.526 [2024-10-08 21:00:01.640203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.526 [2024-10-08 21:00:01.640214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640501] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95936 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:38.527 [2024-10-08 21:00:01.640806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95944 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95952 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95960 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.640958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95968 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.640971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.640983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.640994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95984 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641090] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95992 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96008 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.527 [2024-10-08 21:00:01.641370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:38.527 [2024-10-08 21:00:01.641381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:32:38.527 [2024-10-08 21:00:01.641394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.527 [2024-10-08 21:00:01.641406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95208 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95216 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 
[2024-10-08 21:00:01.641674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95224 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95232 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95240 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95248 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95256 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95264 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.641958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.641969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95272 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.641982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.641995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95280 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95296 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95304 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95312 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:95320 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95328 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95336 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95344 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95352 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95360 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95368 len:8 PRP1 0x0 PRP2 0x0 
00:32:38.528 [2024-10-08 21:00:01.642576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.528 [2024-10-08 21:00:01.642594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.528 [2024-10-08 21:00:01.642606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.528 [2024-10-08 21:00:01.642617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95376 len:8 PRP1 0x0 PRP2 0x0 00:32:38.528 [2024-10-08 21:00:01.642630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95384 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95392 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95400 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95408 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95416 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95424 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.642962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.642975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.642986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95432 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.643013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.643027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.643038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.643049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.643062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.643075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.643086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.643096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.643109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.643123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.643139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.643151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95456 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.643164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.643176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.643187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.643199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.643212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95472 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95488 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95496 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95504 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95512 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:38.529 [2024-10-08 21:00:01.649434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95536 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95544 len:8 PRP1 0x0 PRP2 0x0 00:32:38.529 [2024-10-08 21:00:01.649609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.529 [2024-10-08 21:00:01.649622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.529 [2024-10-08 21:00:01.649633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.529 [2024-10-08 21:00:01.649643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95560 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649740] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95568 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.649960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.649973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.649983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.649994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95576 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95584 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95592 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 
21:00:01.650310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650592] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:38.530 [2024-10-08 21:00:01.650742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:38.530 [2024-10-08 21:00:01.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:32:38.530 [2024-10-08 21:00:01.650765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.530 [2024-10-08 21:00:01.650829] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2236e20 was disconnected and freed. reset controller. 00:32:38.530 [2024-10-08 21:00:01.650852] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:38.530 [2024-10-08 21:00:01.650868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.530 [2024-10-08 21:00:01.650923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2208cc0 (9): Bad file descriptor 00:32:38.530 [2024-10-08 21:00:01.654134] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.531 [2024-10-08 21:00:01.778137] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
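The block above is the tail of the forced-failover pass: every I/O still queued on the deleted submission queue is completed manually with ABORTED - SQ DELETION (00/08), after which bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 and resets the controller. When scanning a saved copy of this console output, a one-liner along these lines condenses the abort spam into a per-opcode count (the file name console.log is only a placeholder; nothing in the test produces it):

    # Tally the manually completed READ/WRITE commands by opcode in a saved copy of this log.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]* sqid:1' console.log | awk '{print $3}' | sort | uniq -c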
00:32:38.531 8376.50 IOPS, 32.72 MiB/s [2024-10-08T19:00:07.294Z] 8423.36 IOPS, 32.90 MiB/s [2024-10-08T19:00:07.294Z] 8444.08 IOPS, 32.98 MiB/s [2024-10-08T19:00:07.294Z] 8461.15 IOPS, 33.05 MiB/s [2024-10-08T19:00:07.294Z] 8477.71 IOPS, 33.12 MiB/s
00:32:38.531 Latency(us)
00:32:38.531 [2024-10-08T19:00:07.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.531 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:38.531 Verification LBA range: start 0x0 length 0x4000
00:32:38.531 NVMe0n1 : 15.00 8495.96 33.19 1062.44 0.00 13365.54 521.86 21165.70
00:32:38.531 [2024-10-08T19:00:07.294Z] ===================================================================================================================
00:32:38.531 [2024-10-08T19:00:07.294Z] Total : 8495.96 33.19 1062.44 0.00 13365.54 521.86 21165.70
00:32:38.531 Received shutdown signal, test time was about 15.000000 seconds
00:32:38.531
00:32:38.531 Latency(us)
00:32:38.531 [2024-10-08T19:00:07.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.531 [2024-10-08T19:00:07.294Z] ===================================================================================================================
00:32:38.531 [2024-10-08T19:00:07.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1817234
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1817234 /var/tmp/bdevperf.sock
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1817234 ']'
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
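In the Device Information tables above, the MiB/s column is just IOPS scaled by the 4096-byte I/O size bdevperf was started with (-o 4096). A quick arithmetic check against the Total row of the 15-second run, using plain shell rather than any SPDK tooling:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8495.96 * 4096 / (1024 * 1024) }'
    # prints 33.19 MiB/s, matching the Total row above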
00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:38.531 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:39.099 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.099 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:39.099 21:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:39.357 [2024-10-08 21:00:08.041115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.357 21:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:39.923 [2024-10-08 21:00:08.410162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:39.923 21:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:40.490 NVMe0n1 00:32:40.490 21:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:40.749 00:32:40.749 21:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:41.315 00:32:41.315 21:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:41.315 21:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:41.881 21:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:42.140 21:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:45.424 21:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:45.424 21:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:45.682 21:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1818537 00:32:45.682 21:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1818537 00:32:45.682 21:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:47.056 { 00:32:47.056 "results": [ 00:32:47.056 { 00:32:47.056 "job": "NVMe0n1", 00:32:47.056 "core_mask": "0x1", 
00:32:47.056 "workload": "verify", 00:32:47.056 "status": "finished", 00:32:47.056 "verify_range": { 00:32:47.056 "start": 0, 00:32:47.056 "length": 16384 00:32:47.056 }, 00:32:47.056 "queue_depth": 128, 00:32:47.056 "io_size": 4096, 00:32:47.056 "runtime": 1.008611, 00:32:47.056 "iops": 8671.331167318223, 00:32:47.056 "mibps": 33.87238737233681, 00:32:47.056 "io_failed": 0, 00:32:47.056 "io_timeout": 0, 00:32:47.056 "avg_latency_us": 14693.159710343776, 00:32:47.056 "min_latency_us": 3179.7096296296295, 00:32:47.056 "max_latency_us": 13107.2 00:32:47.056 } 00:32:47.056 ], 00:32:47.056 "core_count": 1 00:32:47.056 } 00:32:47.056 21:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.056 [2024-10-08 21:00:07.310946] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:32:47.056 [2024-10-08 21:00:07.311039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817234 ] 00:32:47.056 [2024-10-08 21:00:07.370467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.056 [2024-10-08 21:00:07.477916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.056 [2024-10-08 21:00:10.693357] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:47.056 [2024-10-08 21:00:10.693470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.056 [2024-10-08 21:00:10.693495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.056 [2024-10-08 21:00:10.693513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.056 [2024-10-08 21:00:10.693526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.056 [2024-10-08 21:00:10.693540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.056 [2024-10-08 21:00:10.693554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.056 [2024-10-08 21:00:10.693568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.056 [2024-10-08 21:00:10.693581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.056 [2024-10-08 21:00:10.693595] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.056 [2024-10-08 21:00:10.693673] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.056 [2024-10-08 21:00:10.693708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149bcc0 (9): Bad file descriptor 00:32:47.056 [2024-10-08 21:00:10.698882] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:47.056 Running I/O for 1 seconds... 00:32:47.056 8618.00 IOPS, 33.66 MiB/s 00:32:47.056 Latency(us) 00:32:47.056 [2024-10-08T19:00:15.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.056 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.056 Verification LBA range: start 0x0 length 0x4000 00:32:47.056 NVMe0n1 : 1.01 8671.33 33.87 0.00 0.00 14693.16 3179.71 13107.20 00:32:47.056 [2024-10-08T19:00:15.819Z] =================================================================================================================== 00:32:47.056 [2024-10-08T19:00:15.819Z] Total : 8671.33 33.87 0.00 0.00 14693.16 3179.71 13107.20 00:32:47.056 21:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:47.056 21:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:47.625 21:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.192 21:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:48.192 21:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:48.758 21:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.017 21:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:52.310 21:00:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:52.311 21:00:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1817234 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1817234 ']' 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1817234 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1817234 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1817234' 00:32:52.570 killing process with pid 1817234 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1817234 00:32:52.570 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1817234 00:32:52.838 21:00:21 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:52.838 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:53.482 rmmod nvme_tcp 00:32:53.482 rmmod nvme_fabrics 00:32:53.482 rmmod nvme_keyring 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1814736 ']' 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1814736 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1814736 ']' 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1814736 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.482 21:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1814736 00:32:53.482 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:53.482 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:53.482 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1814736' 00:32:53.482 killing process with pid 1814736 00:32:53.482 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1814736 00:32:53.482 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1814736 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.742 21:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.280 21:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.280 00:32:56.280 real 0m41.319s 00:32:56.280 user 2m26.144s 00:32:56.280 sys 0m7.767s 00:32:56.280 21:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.280 21:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:56.280 ************************************ 00:32:56.281 END TEST nvmf_failover 00:32:56.281 ************************************ 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.281 ************************************ 00:32:56.281 START TEST nvmf_host_discovery 00:32:56.281 ************************************ 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:56.281 * Looking for test storage... 
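Just before the START TEST banner above, nvmftestfini tears the nvmf_failover fixture down. Reconstructed from the xtrace lines above (a condensed sketch with the long /var/jenkins/... prefixes shortened, not the verbatim helper), the teardown amounts to:

    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem
    rm -f test/nvmf/host/try.txt                                       # remove the scratch log
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics             # unload the host-side NVMe/TCP modules
    kill "$nvmfpid"                                                    # stop nvmf_tgt (pid 1814736 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore               # strip the rules the test added
    ip -4 addr flush cvl_0_1                                           # clear the initiator-side address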
00:32:56.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:56.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.281 --rc genhtml_branch_coverage=1 00:32:56.281 --rc genhtml_function_coverage=1 00:32:56.281 --rc genhtml_legend=1 00:32:56.281 --rc geninfo_all_blocks=1 00:32:56.281 --rc geninfo_unexecuted_blocks=1 00:32:56.281 00:32:56.281 ' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:56.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.281 --rc genhtml_branch_coverage=1 00:32:56.281 --rc genhtml_function_coverage=1 00:32:56.281 --rc genhtml_legend=1 00:32:56.281 --rc geninfo_all_blocks=1 00:32:56.281 --rc geninfo_unexecuted_blocks=1 00:32:56.281 00:32:56.281 ' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:56.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.281 --rc genhtml_branch_coverage=1 00:32:56.281 --rc genhtml_function_coverage=1 00:32:56.281 --rc genhtml_legend=1 00:32:56.281 --rc geninfo_all_blocks=1 00:32:56.281 --rc geninfo_unexecuted_blocks=1 00:32:56.281 00:32:56.281 ' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:56.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.281 --rc genhtml_branch_coverage=1 00:32:56.281 --rc genhtml_function_coverage=1 00:32:56.281 --rc genhtml_legend=1 00:32:56.281 --rc geninfo_all_blocks=1 00:32:56.281 --rc geninfo_unexecuted_blocks=1 00:32:56.281 00:32:56.281 ' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:56.281 21:00:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:56.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.281 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:56.282 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:59.570 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:59.570 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.570 21:00:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:59.570 Found net devices under 0000:84:00.0: cvl_0_0 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:59.570 Found net devices under 0000:84:00.1: cvl_0_1 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.570 
21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.570 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:32:59.571 00:32:59.571 --- 10.0.0.2 ping statistics --- 00:32:59.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.571 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:32:59.571 00:32:59.571 --- 10.0.0.1 ping statistics --- 00:32:59.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.571 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1821433 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1821433 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1821433 ']' 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:59.571 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.571 [2024-10-08 21:00:27.987756] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
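The two ping blocks above confirm the point-to-point topology the host tests run on: the target half of the e810 pair (cvl_0_0, 10.0.0.2) sits inside the cvl_0_0_ns_spdk network namespace, the initiator half (cvl_0_1, 10.0.0.1) stays in the root namespace, and nvmf_tgt is then launched inside that namespace. The nvmf_tcp_init steps traced above reduce to the following (interface names and addresses as in the log; a summary, not the verbatim common.sh):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace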
00:32:59.571 [2024-10-08 21:00:27.987926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.571 [2024-10-08 21:00:28.147622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.571 [2024-10-08 21:00:28.327779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.571 [2024-10-08 21:00:28.327843] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.571 [2024-10-08 21:00:28.327860] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.571 [2024-10-08 21:00:28.327874] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.571 [2024-10-08 21:00:28.327886] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.571 [2024-10-08 21:00:28.329055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 [2024-10-08 21:00:29.454212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 [2024-10-08 21:00:29.462544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 null0 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 null1 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1821648 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1821648 /tmp/host.sock 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1821648 ']' 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:00.952 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.952 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.952 [2024-10-08 21:00:29.543606] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
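[Editor's note] Stripped of the xtrace noise, the target-side configuration traced above, plus the second SPDK app that acts as the NVMe-oF host on its own RPC socket /tmp/host.sock, boils down to the following RPC sequence (a sketch using scripts/rpc.py directly; the trace issues the same calls through the harness wrapper rpc_cmd):

    # Target side (default RPC socket): TCP transport, discovery listener, two null bdevs
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine
    # Host side: a second SPDK app with its own reactor mask and RPC socket
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!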
00:33:00.952 [2024-10-08 21:00:29.543704] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821648 ] 00:33:00.952 [2024-10-08 21:00:29.650123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.211 [2024-10-08 21:00:29.870293] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:02.413 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:02.413 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.672 [2024-10-08 21:00:31.284520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.672 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:33:02.931 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:03.189 [2024-10-08 21:00:31.877067] bdev_nvme.c:7257:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:03.189 [2024-10-08 21:00:31.877138] bdev_nvme.c:7343:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:03.189 [2024-10-08 21:00:31.877197] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:03.448 [2024-10-08 21:00:31.963552] bdev_nvme.c:7186:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:03.448 [2024-10-08 21:00:32.069967] bdev_nvme.c:7076:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:03.448 [2024-10-08 21:00:32.070029] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
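[Editor's note] At this point the host's discovery service has attached controller nvme0 and exposed the namespace as bdev nvme0n1. The sequence the test walked through to get here, and the waitforcondition-style polling it uses to verify it, look roughly like this (a sketch; the real helpers live in host/discovery.sh and common/autotest_common.sh):

    # Host side: start the discovery service first (discovery.sh@51)
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # Target side: the subsystem is created afterwards, so the host learns about it
    # from the discovery log page after it has already connected to the discovery ctrlr
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # Polling loop in the spirit of: waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    for _ in $(seq 1 10); do
        names=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
        [[ "$names" == nvme0 ]] && break
        sleep 1
    done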
00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:04.015 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.273 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:33:04.273 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:04.273 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:04.274 21:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:04.532 21:00:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.909 [2024-10-08 21:00:34.346504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:05.909 [2024-10-08 21:00:34.347538] bdev_nvme.c:7239:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:05.909 [2024-10-08 21:00:34.347626] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:05.909 21:00:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.909 [2024-10-08 21:00:34.475869] bdev_nvme.c:7181:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.909 [2024-10-08 21:00:34.535224] bdev_nvme.c:7076:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:05.909 [2024-10-08 21:00:34.535282] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:05.909 [2024-10-08 21:00:34.535306] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:05.909 21:00:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:06.851 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.111 [2024-10-08 21:00:35.688076] bdev_nvme.c:7239:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:07.111 [2024-10-08 21:00:35.688153] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.111 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:07.111 [2024-10-08 21:00:35.694443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.111 [2024-10-08 21:00:35.694519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.111 [2024-10-08 21:00:35.694561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.112 [2024-10-08 21:00:35.694597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.112 [2024-10-08 21:00:35.694632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.112 [2024-10-08 21:00:35.694684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.112 [2024-10-08 21:00:35.694720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.112 [2024-10-08 21:00:35.694736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.112 [2024-10-08 21:00:35.694751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.112 [2024-10-08 21:00:35.704423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.112 [2024-10-08 21:00:35.714496] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.714857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.714890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.714909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.714935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.715033] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.715077] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.715115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.715167] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
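[Editor's note] The errno = 111 (ECONNREFUSED) blocks above and below are expected at this point in the test: discovery.sh@127 has just removed the 4420 listener, so the host's bdev_nvme layer keeps trying to reset and reconnect the existing path to 10.0.0.2:4420 until the next discovery log page drops that path and leaves only 4421. A quick way to confirm the target-side state while this is happening (a sketch, assuming the usual JSON layout returned by nvmf_subsystem_get_listeners):

    # Target side: only the 4421 listener should remain on cnode0
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0 \
        | jq -r '.[].address.trsvcid'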
00:33:07.112 [2024-10-08 21:00:35.724649] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.724940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.725010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.725051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.725105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.725157] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.725192] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.725225] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.725272] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.112 [2024-10-08 21:00:35.734797] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.735141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.735214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.735256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.735311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.735397] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.735441] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.735475] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.735555] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.112 [2024-10-08 21:00:35.744947] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.745271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.745340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.745379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.745435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.745486] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.745520] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.745551] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.745629] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.112 [2024-10-08 21:00:35.755088] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.755407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.755476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.755515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.755591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.755727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.755778] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.755813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.755860] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
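[Editor's note] Once the 4420 path is gone, the test's get_subsystem_paths helper (discovery.sh@63, used by the check at discovery.sh@131 below) should report only the second port. The same check can be run by hand against the host's RPC socket; this mirrors the jq/sort/xargs pipeline visible in the trace:

    # Host side: list the transport service IDs of all paths of controller nvme0
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Expected output once failover completes: 4421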
00:33:07.112 [2024-10-08 21:00:35.765229] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:07.112 [2024-10-08 21:00:35.765548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.112 [2024-10-08 21:00:35.765618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6acd0 with addr=10.0.0.2, port=4420 00:33:07.112 [2024-10-08 21:00:35.765685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6acd0 is same with the state(6) to be set 00:33:07.112 [2024-10-08 21:00:35.765744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6acd0 (9): Bad file descriptor 00:33:07.112 [2024-10-08 21:00:35.765823] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.112 [2024-10-08 21:00:35.765866] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:07.112 [2024-10-08 21:00:35.765899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.112 [2024-10-08 21:00:35.765947] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.112 [2024-10-08 21:00:35.774842] bdev_nvme.c:7044:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:07.112 [2024-10-08 21:00:35.774911] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:07.112 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:07.371 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:07.372 21:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.372 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.632 21:00:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.570 [2024-10-08 21:00:37.231496] bdev_nvme.c:7257:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:08.570 [2024-10-08 21:00:37.231553] bdev_nvme.c:7343:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:08.570 [2024-10-08 21:00:37.231608] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:08.570 [2024-10-08 21:00:37.318964] bdev_nvme.c:7186:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:08.829 [2024-10-08 21:00:37.512085] bdev_nvme.c:7076:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:08.829 [2024-10-08 21:00:37.512172] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.829 request: 00:33:08.829 { 00:33:08.829 "name": "nvme", 00:33:08.829 "trtype": "tcp", 00:33:08.829 "traddr": "10.0.0.2", 00:33:08.829 "adrfam": "ipv4", 00:33:08.829 "trsvcid": "8009", 00:33:08.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:08.829 "wait_for_attach": true, 00:33:08.829 "method": "bdev_nvme_start_discovery", 00:33:08.829 "req_id": 1 00:33:08.829 } 00:33:08.829 Got JSON-RPC error response 00:33:08.829 response: 00:33:08.829 { 00:33:08.829 "code": -17, 00:33:08.829 "message": "File exists" 00:33:08.829 } 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:08.829 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.089 request: 00:33:09.089 { 00:33:09.089 "name": "nvme_second", 00:33:09.089 "trtype": "tcp", 00:33:09.089 "traddr": "10.0.0.2", 00:33:09.089 "adrfam": "ipv4", 00:33:09.089 "trsvcid": "8009", 00:33:09.089 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:09.089 "wait_for_attach": true, 00:33:09.089 "method": "bdev_nvme_start_discovery", 00:33:09.089 "req_id": 1 00:33:09.089 } 00:33:09.089 Got JSON-RPC error response 00:33:09.089 response: 00:33:09.089 { 00:33:09.089 "code": -17, 00:33:09.089 "message": "File exists" 00:33:09.089 } 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:09.089 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.090 21:00:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.090 21:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.470 [2024-10-08 21:00:38.804525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.470 [2024-10-08 21:00:38.804634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6ef00 with addr=10.0.0.2, port=8010 00:33:10.470 [2024-10-08 21:00:38.804724] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:10.470 [2024-10-08 21:00:38.804789] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:10.470 [2024-10-08 21:00:38.804836] bdev_nvme.c:7325:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:11.408 [2024-10-08 21:00:39.806935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.408 [2024-10-08 21:00:39.807024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6ef00 with addr=10.0.0.2, port=8010 00:33:11.408 [2024-10-08 21:00:39.807078] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:11.408 [2024-10-08 21:00:39.807111] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:11.408 [2024-10-08 21:00:39.807141] 
bdev_nvme.c:7325:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:12.345 [2024-10-08 21:00:40.809004] bdev_nvme.c:7300:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:12.345 request: 00:33:12.345 { 00:33:12.345 "name": "nvme_second", 00:33:12.345 "trtype": "tcp", 00:33:12.345 "traddr": "10.0.0.2", 00:33:12.345 "adrfam": "ipv4", 00:33:12.345 "trsvcid": "8010", 00:33:12.345 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.345 "wait_for_attach": false, 00:33:12.345 "attach_timeout_ms": 3000, 00:33:12.345 "method": "bdev_nvme_start_discovery", 00:33:12.345 "req_id": 1 00:33:12.345 } 00:33:12.345 Got JSON-RPC error response 00:33:12.345 response: 00:33:12.345 { 00:33:12.345 "code": -110, 00:33:12.345 "message": "Connection timed out" 00:33:12.345 } 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1821648 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:12.345 rmmod nvme_tcp 00:33:12.345 rmmod nvme_fabrics 00:33:12.345 rmmod nvme_keyring 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
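The discovery flow traced above reduces to a short sequence of SPDK JSON-RPC calls against the host application's socket: start discovery, poll until the expected namespaces and path appear, inspect the discovery controller, and stop it; a second start against the same 8009 service is rejected with -17 "File exists" and an unreachable 8010 service times out with -110, exactly as the JSON-RPC responses above show. A minimal sketch of that sequence, assuming an SPDK host app already listening on /tmp/host.sock and a target discovery service at 10.0.0.2:8009 as in this log; the polling loop only mirrors the waitforcondition helper traced earlier and is illustrative, not the test script itself.

rpc="scripts/rpc.py -s /tmp/host.sock"            # path assumed relative to an SPDK checkout

# Start discovery and wait for the initial attach (-w), as host/discovery.sh does above.
$rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Poll until the expected namespaces show up (rough stand-in for waitforcondition).
for _ in $(seq 1 10); do
    bdevs=$($rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ $bdevs == "nvme0n1 nvme0n2" ]] && break
    sleep 1
done

# Inspect the attached path(s) and the discovery service, then tear it down.
$rpc bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
$rpc bdev_nvme_get_discovery_info | jq -r '.[].name'
$rpc bdev_nvme_stop_discovery -b nvme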
00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1821433 ']' 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1821433 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1821433 ']' 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1821433 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:12.345 21:00:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1821433 00:33:12.345 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:12.345 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:12.345 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1821433' 00:33:12.345 killing process with pid 1821433 00:33:12.345 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1821433 00:33:12.345 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1821433 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.914 21:00:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:14.821 00:33:14.821 real 0m18.926s 00:33:14.821 user 0m28.769s 00:33:14.821 sys 0m4.539s 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.821 ************************************ 00:33:14.821 END TEST nvmf_host_discovery 00:33:14.821 
************************************ 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:14.821 21:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.081 ************************************ 00:33:15.082 START TEST nvmf_host_multipath_status 00:33:15.082 ************************************ 00:33:15.082 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:15.082 * Looking for test storage... 00:33:15.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:15.082 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:15.082 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:33:15.082 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:15.341 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:15.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.342 --rc genhtml_branch_coverage=1 00:33:15.342 --rc genhtml_function_coverage=1 00:33:15.342 --rc genhtml_legend=1 00:33:15.342 --rc geninfo_all_blocks=1 00:33:15.342 --rc geninfo_unexecuted_blocks=1 00:33:15.342 00:33:15.342 ' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:15.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.342 --rc genhtml_branch_coverage=1 00:33:15.342 --rc genhtml_function_coverage=1 00:33:15.342 --rc genhtml_legend=1 00:33:15.342 --rc geninfo_all_blocks=1 00:33:15.342 --rc geninfo_unexecuted_blocks=1 00:33:15.342 00:33:15.342 ' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:15.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.342 --rc genhtml_branch_coverage=1 00:33:15.342 --rc genhtml_function_coverage=1 00:33:15.342 --rc genhtml_legend=1 00:33:15.342 --rc geninfo_all_blocks=1 00:33:15.342 --rc geninfo_unexecuted_blocks=1 00:33:15.342 00:33:15.342 ' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:15.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.342 --rc genhtml_branch_coverage=1 00:33:15.342 --rc genhtml_function_coverage=1 00:33:15.342 --rc genhtml_legend=1 00:33:15.342 --rc geninfo_all_blocks=1 00:33:15.342 --rc geninfo_unexecuted_blocks=1 00:33:15.342 00:33:15.342 ' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
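The xtrace run just above is autotest_common.sh checking whether the installed lcov (1.15 here) predates version 2, in which case the older --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options are kept: it splits both version strings on '.' and '-' and compares them field by field. A condensed sketch of that comparison, written here only to make the traced logic easier to follow; it is not the cmp_versions helper itself.

# Returns 0 (true) when version $1 sorts before version $2, field by field.
ver_lt() {
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "old lcov: keep the --rc lcov_*_coverage=1 options"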
00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:15.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.342 21:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.651 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.652 21:00:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:18.652 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
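The device probe above has just matched the first Intel E810 function, 0000:84:00.0 (vendor 0x8086, device 0x159b, ice driver); the trace then resolves each matched PCI function to its kernel net device through sysfs, producing the "Found net devices under ..." lines that follow. A minimal sketch of that sysfs lookup, using the first address from this log; the operstate read is an assumption for illustration, and this is not the common.sh helper itself.

pci=0000:84:00.0                                  # first E810 function reported above
for net in /sys/bus/pci/devices/$pci/net/*; do
    dev=${net##*/}                                # strip the sysfs path, e.g. cvl_0_0
    state=$(cat "$net/operstate" 2>/dev/null)     # assumed check that the link is "up"
    echo "Found net device under $pci: $dev ($state)"
done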
00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:18.652 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:18.652 Found net devices under 0000:84:00.0: cvl_0_0 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: 
cvl_0_1' 00:33:18.652 Found net devices under 0000:84:00.1: cvl_0_1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.652 21:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.652 21:00:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:33:18.652 00:33:18.652 --- 10.0.0.2 ping statistics --- 00:33:18.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.652 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:18.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:33:18.652 00:33:18.652 --- 10.0.0.1 ping statistics --- 00:33:18.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.652 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:18.652 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1825154 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1825154 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1825154 ']' 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.653 21:00:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.653 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:18.653 [2024-10-08 21:00:47.209974] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:33:18.653 [2024-10-08 21:00:47.210149] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.653 [2024-10-08 21:00:47.353478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:18.912 [2024-10-08 21:00:47.572916] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.912 [2024-10-08 21:00:47.573034] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.912 [2024-10-08 21:00:47.573072] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.912 [2024-10-08 21:00:47.573103] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.912 [2024-10-08 21:00:47.573129] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.912 [2024-10-08 21:00:47.578700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.912 [2024-10-08 21:00:47.578715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1825154 00:33:19.173 21:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:19.432 [2024-10-08 21:00:48.113439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.432 21:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:20.008 Malloc0 00:33:20.277 21:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:20.848 21:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:21.419 21:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.989 [2024-10-08 21:00:50.579467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.989 21:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:22.561 [2024-10-08 21:00:51.177703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1825688 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1825688 /var/tmp/bdevperf.sock 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1825688 ']' 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
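For reference, the target-side bring-up the trace above has just completed reduces to a short RPC sequence: create the TCP transport, back the subsystem with a Malloc bdev, enable ANA reporting on the subsystem, and expose it on two portals so the host sees two paths to the same namespace. A minimal standalone sketch follows; the $rpc shortcut and the assumption that nvmf_tgt is already running are illustrative, while the commands and their arguments are the ones visible in the log.

# Sketch: two-portal, ANA-reporting NVMe-oF TCP target (assumes nvmf_tgt is already running).
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, with the options used by this test
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2  # -r enables ANA reporting on the subsystem
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # first portal
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421  # second portal, same subsystem
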
00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:22.561 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.128 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:23.128 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:23.128 21:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:23.695 21:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:24.632 Nvme0n1 00:33:24.632 21:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:25.596 Nvme0n1 00:33:25.596 21:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:25.596 21:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:27.501 21:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:27.501 21:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:28.069 21:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.328 21:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:29.313 21:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:29.313 21:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:29.313 21:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.313 21:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.886 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.886 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:29.886 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.886 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:30.145 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:30.145 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:30.145 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.145 21:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:30.714 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.714 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:30.714 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.714 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.973 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.973 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.973 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.973 21:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:31.541 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.541 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:31.541 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.541 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:31.800 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.800 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:31.800 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:33:32.370 21:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:32.629 21:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:33.568 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:33.568 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:33.568 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.568 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:34.138 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:34.138 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:34.138 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.138 21:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:34.711 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.711 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:34.711 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.711 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:35.281 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.281 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:35.281 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.281 21:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:35.539 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.539 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:35.539 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
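Each port_status check being traced here is one RPC against the bdevperf instance plus a jq filter that picks a single flag (current, connected, or accessible) from the I/O path whose trsvcid matches the portal under test. A self-contained helper in the spirit of the traced port_status function is sketched below; the standalone layout is an assumption, while the RPC call, socket path, and jq expression are taken verbatim from the log.

# Sketch: check one attribute of the I/O path terminating on a given portal.
# Assumes bdevperf is serving RPCs on /var/tmp/bdevperf.sock, as above.
port_status() {
    local port=$1 attr=$2 expected=$3
    local value
    value=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ "$value" == "$expected" ]]
}
# Example mirroring the checks above: with 4420 non_optimized and 4421 optimized,
# the 4421 path should be current and both paths should remain connected and accessible.
port_status 4421 current true && port_status 4420 connected true
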
00:33:35.540 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.107 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.107 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.107 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.107 21:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.366 21:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.367 21:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:36.367 21:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:36.933 21:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:37.500 21:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:38.439 21:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:38.439 21:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:38.439 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.439 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:39.007 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.007 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:39.007 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.007 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.266 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.266 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.266 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.266 21:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.832 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.832 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.832 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.832 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:40.090 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.090 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:40.090 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.090 21:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:40.658 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.658 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:40.658 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.658 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:40.917 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.917 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:40.917 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:41.177 21:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:41.744 21:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:42.683 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:42.683 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:42.683 21:01:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.683 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.942 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.942 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:43.201 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.201 21:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.460 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.460 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.460 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.460 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.719 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.719 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.719 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.719 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.285 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.285 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.285 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.285 21:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.542 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.542 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:44.542 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.542 21:01:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:45.107 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.107 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:45.108 21:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:45.366 21:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:45.935 21:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:46.873 21:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:46.873 21:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:46.873 21:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:46.873 21:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.809 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:47.809 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:47.809 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.809 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.376 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.376 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.376 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.376 21:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.635 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.635 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.635 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.635 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.202 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.202 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:49.202 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.202 21:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.461 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.461 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:49.461 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.462 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:50.043 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.043 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:50.043 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:50.303 21:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:50.870 21:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:51.808 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:51.808 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:51.808 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.808 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:52.376 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.376 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:52.376 21:01:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.376 21:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:52.946 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.946 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:52.946 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.946 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:53.205 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.205 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:53.205 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.205 21:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:53.464 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.464 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:53.464 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.464 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.031 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.031 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:54.031 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.031 21:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:54.599 21:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.599 21:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:55.167 21:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:55.167 21:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:55.426 21:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:55.999 21:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:56.935 21:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:56.935 21:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:56.935 21:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.935 21:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.504 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.504 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:57.505 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.505 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:57.763 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.763 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:57.763 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.763 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:58.365 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.365 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:58.365 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.365 21:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:58.649 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.649 21:01:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:58.649 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.649 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.216 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.216 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.216 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.216 21:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.789 21:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.789 21:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:59.789 21:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:00.049 21:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:00.617 21:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:01.554 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:01.554 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:01.554 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.554 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.813 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:01.813 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:01.813 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.813 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:02.382 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.382 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:02.382 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.382 21:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:02.951 21:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.951 21:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:02.951 21:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.951 21:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:03.520 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.520 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:03.520 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.520 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:04.088 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.088 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:04.088 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.088 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:04.347 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.347 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:04.347 21:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:04.605 21:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:05.173 21:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
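From here to the end of the run the pattern repeats: advertise a new ANA state on each of the two listeners, sleep a second so the initiator can pick up the change, then re-run the six port_status checks. The state-flipping step mirrors the set_ANA_state function being traced; its standalone form below is a sketch, with the RPCs and arguments as they appear in the log.

# Sketch: advertise new ANA states on both listeners, then let the host catch up.
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    sleep 1
}
# Example matching the step just traced, with the active_active multipath policy set earlier:
set_ANA_state non_optimized non_optimized
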
00:34:06.112 21:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:06.112 21:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:06.112 21:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.112 21:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:06.680 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:06.680 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:06.680 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.680 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:06.938 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:06.938 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:06.938 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.938 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:07.196 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.196 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:07.196 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.196 21:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:07.763 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.763 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:07.763 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.763 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:08.022 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.022 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:08.022 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.022 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:08.281 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.281 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:08.281 21:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:08.849 21:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:09.108 21:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:10.485 21:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:10.485 21:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:10.485 21:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.485 21:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:10.485 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.485 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:10.485 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.485 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.053 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:11.053 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.053 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.053 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:11.313 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:11.313 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:11.313 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.313 21:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:11.880 21:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.880 21:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:11.880 21:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.880 21:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.448 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.448 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:12.448 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.448 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:13.014 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1825688 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1825688 ']' 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1825688 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825688 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825688' 00:34:13.015 killing process with pid 1825688 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1825688 00:34:13.015 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1825688 00:34:13.015 { 00:34:13.015 "results": [ 00:34:13.015 { 00:34:13.015 "job": "Nvme0n1", 
00:34:13.015 "core_mask": "0x4", 00:34:13.015 "workload": "verify", 00:34:13.015 "status": "terminated", 00:34:13.015 "verify_range": { 00:34:13.015 "start": 0, 00:34:13.015 "length": 16384 00:34:13.015 }, 00:34:13.015 "queue_depth": 128, 00:34:13.015 "io_size": 4096, 00:34:13.015 "runtime": 47.102568, 00:34:13.015 "iops": 4239.301772251568, 00:34:13.015 "mibps": 16.559772547857687, 00:34:13.015 "io_failed": 0, 00:34:13.015 "io_timeout": 0, 00:34:13.015 "avg_latency_us": 30140.33051253716, 00:34:13.015 "min_latency_us": 276.1007407407407, 00:34:13.015 "max_latency_us": 6114363.164444445 00:34:13.015 } 00:34:13.015 ], 00:34:13.015 "core_count": 1 00:34:13.015 } 00:34:13.285 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1825688 00:34:13.285 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:13.285 [2024-10-08 21:00:51.269076] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:34:13.285 [2024-10-08 21:00:51.269179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825688 ] 00:34:13.285 [2024-10-08 21:00:51.336953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.285 [2024-10-08 21:00:51.463234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.285 Running I/O for 90 seconds... 00:34:13.285 4371.00 IOPS, 17.07 MiB/s [2024-10-08T19:01:42.048Z] 4429.50 IOPS, 17.30 MiB/s [2024-10-08T19:01:42.048Z] 4457.67 IOPS, 17.41 MiB/s [2024-10-08T19:01:42.048Z] 4495.75 IOPS, 17.56 MiB/s [2024-10-08T19:01:42.048Z] 4642.00 IOPS, 18.13 MiB/s [2024-10-08T19:01:42.048Z] 4658.00 IOPS, 18.20 MiB/s [2024-10-08T19:01:42.048Z] 4662.29 IOPS, 18.21 MiB/s [2024-10-08T19:01:42.048Z] 4628.75 IOPS, 18.08 MiB/s [2024-10-08T19:01:42.048Z] 4617.22 IOPS, 18.04 MiB/s [2024-10-08T19:01:42.048Z] 4613.30 IOPS, 18.02 MiB/s [2024-10-08T19:01:42.048Z] 4613.09 IOPS, 18.02 MiB/s [2024-10-08T19:01:42.048Z] 4595.83 IOPS, 17.95 MiB/s [2024-10-08T19:01:42.048Z] 4568.77 IOPS, 17.85 MiB/s [2024-10-08T19:01:42.048Z] 4557.64 IOPS, 17.80 MiB/s [2024-10-08T19:01:42.048Z] 4563.53 IOPS, 17.83 MiB/s [2024-10-08T19:01:42.048Z] 4582.69 IOPS, 17.90 MiB/s [2024-10-08T19:01:42.048Z] 4578.18 IOPS, 17.88 MiB/s [2024-10-08T19:01:42.048Z] 4568.72 IOPS, 17.85 MiB/s [2024-10-08T19:01:42.048Z] 4565.42 IOPS, 17.83 MiB/s [2024-10-08T19:01:42.048Z] [2024-10-08 21:01:13.982118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.285 [2024-10-08 21:01:13.982224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.285 [2024-10-08 21:01:13.982459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.982936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.982978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.285 [2024-10-08 21:01:13.983520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:13.285 [2024-10-08 21:01:13.983573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.983614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.983691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.983737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.983793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.983834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.983888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.983930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.983985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.984877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.984895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
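The nvme_qpair.c pairs that dominate the rest of try.txt are the host side printing each queued command together with the error completion it received: ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the path-related NVMe status returned while a listener's ANA state is inaccessible, and the bdev_nvme multipath layer is expected to retry those I/Os on the path that is still accessible, consistent with the "io_failed": 0 in the summary above. A couple of throwaway grep counts over the dumped log file (path taken from the cat command above) are enough to see how much traffic hit the inaccessible path; this is an ad-hoc check, not part of the test script.

```bash
# Ad-hoc tallies over the bdevperf log dumped above (try.txt); not part of the test itself.
LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

# completions failed with the ANA "inaccessible" path status
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$LOG"

# breakdown of the printed submissions by opcode
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' "$LOG"
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  "$LOG"
```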
00:34:13.286 [2024-10-08 21:01:13.986588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.986740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.986794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.986846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.986889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.986931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.986991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.987916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.987966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.286 [2024-10-08 21:01:13.988664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:13.286 [2024-10-08 21:01:13.988716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.988735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.988765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.988784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.988809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.988826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.988852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.988870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.988895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.287 [2024-10-08 21:01:13.988922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.988947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.988992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
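One more note on the terminated-job JSON further up: the summary is internally consistent, which is a cheap sanity check on any bdevperf run. 16.56 MiB/s is just 4239.3 IOPS times the 4096-byte io_size, and with queue_depth 128 Little's law predicts an average latency of roughly 128 / 4239.3 s, about 30.2 ms, in line with the reported avg_latency_us of ~30140. The two lines below only redo that arithmetic with values copied from the JSON; nothing here queries the target.

```bash
# Values copied from the bdevperf summary above: iops, io_size, queue_depth, avg_latency_us.
echo '4239.301772 * 4096 / (1024 * 1024)' | bc -l   # ~16.56 MiB/s, matches "mibps": 16.5598
echo '128 / 4239.301772 * 1000000'        | bc -l   # ~30194 us, close to avg_latency_us 30140
```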
lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.989911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.989965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.990914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.990965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.991006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.991062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.991102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.991160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.287 [2024-10-08 21:01:13.991201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.992717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.287 [2024-10-08 21:01:13.992744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.992775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.992795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:34:13.287 [2024-10-08 21:01:13.992821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.992840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.992865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.992883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.992908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.992926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.992951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.992978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.993026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.993068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.993128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.993168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:13.287 [2024-10-08 21:01:13.993225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.287 [2024-10-08 21:01:13.993264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.993889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.993936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.994893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.994970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.288 [2024-10-08 21:01:13.995107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.288 [2024-10-08 21:01:13.995788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.995898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.995983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.288 [2024-10-08 21:01:13.996553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:13.288 [2024-10-08 21:01:13.996606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.996944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:13.289 
[2024-10-08 21:01:13.997486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.997912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.997965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.998007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.998063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.999785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.999811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.999841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.999861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.999886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.999904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:13.999957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:13.999999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000703] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.289 [2024-10-08 21:01:14.000843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:13.289 [2024-10-08 21:01:14.000868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.000886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.000910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.000928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.000974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 
21:01:14.001403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.001908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.001949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59696 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.002929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.002954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.003935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.003978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.004033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.290 [2024-10-08 21:01:14.004073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:13.290 [2024-10-08 21:01:14.004127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.004264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.005729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.291 [2024-10-08 21:01:14.005755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.005791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.291 [2024-10-08 21:01:14.005811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.005837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.005856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:34:13.291 [2024-10-08 21:01:14.005880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.005898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.005922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.005957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.006933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.006978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.007917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.007956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.291 [2024-10-08 21:01:14.008243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:13.291 [2024-10-08 21:01:14.008797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.291 [2024-10-08 21:01:14.008815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.008839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.292 [2024-10-08 21:01:14.008857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.008881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.008899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.008923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.008941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.009931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.009949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:34:13.292 [2024-10-08 21:01:14.010583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.010955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.010995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.011051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.011092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.012784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.012814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.012834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.012859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.012878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.012902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.012920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.012944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.012997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.292 [2024-10-08 21:01:14.013527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.292 [2024-10-08 21:01:14.013566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013682] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.013911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.013956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.014028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.014070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.014125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.014164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.014217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.014258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.014312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 21:01:14.014352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:13.293 [2024-10-08 21:01:14.014406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.293 [2024-10-08 
21:01:14.014445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:13.293 [2024-10-08 21:01:14.014499 .. 21:01:14.034556] nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: READ/WRITE commands on qid:1 nsid:1 (lba 59192..60208, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
00:34:13.298 
[2024-10-08 21:01:14.034596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.034955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.034974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.298 [2024-10-08 21:01:14.035077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.035951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.035991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:13.298 [2024-10-08 21:01:14.036765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.298 [2024-10-08 21:01:14.036782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 
21:01:14.036807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.036824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.036848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.036871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.036915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.036938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.036990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.037046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.037085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.037142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.037194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.038847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.038873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.038903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.038965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.039937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.039995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.040939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.040975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.299 [2024-10-08 21:01:14.041551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:13.299 [2024-10-08 21:01:14.041604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.299 [2024-10-08 21:01:14.041643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.041754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.041797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.041846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.041888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.041930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.041996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.042929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.042946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.043968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.043997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.300 [2024-10-08 21:01:14.044054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.300 [2024-10-08 21:01:14.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:34:13.300 [2024-10-08 21:01:14.044330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.044930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.044987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.045052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.045091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.045155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.045195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.045259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.045299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.045363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.300 [2024-10-08 21:01:14.045403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.300 [2024-10-08 21:01:14.045466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.045924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.045967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:13.301 [2024-10-08 21:01:14.046865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.046893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.046966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.301 [2024-10-08 21:01:14.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.047908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.047925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:13.301 [2024-10-08 21:01:14.048935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.301 [2024-10-08 21:01:14.048953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049545] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.049989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.050036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.050113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.050155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.050241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.050284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:14.050361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:14.050402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:13.302 4465.95 IOPS, 17.45 MiB/s [2024-10-08T19:01:42.065Z] 4253.29 IOPS, 16.61 MiB/s [2024-10-08T19:01:42.065Z] 4059.95 IOPS, 15.86 MiB/s [2024-10-08T19:01:42.065Z] 3883.43 IOPS, 15.17 MiB/s [2024-10-08T19:01:42.065Z] 3721.62 IOPS, 14.54 MiB/s [2024-10-08T19:01:42.065Z] 3572.76 IOPS, 13.96 MiB/s [2024-10-08T19:01:42.065Z] 3492.35 IOPS, 13.64 MiB/s [2024-10-08T19:01:42.065Z] 3525.33 IOPS, 13.77 MiB/s [2024-10-08T19:01:42.065Z] 3560.32 IOPS, 13.91 MiB/s [2024-10-08T19:01:42.065Z] 3593.55 IOPS, 14.04 MiB/s [2024-10-08T19:01:42.065Z] 3629.13 IOPS, 14.18 MiB/s [2024-10-08T19:01:42.065Z] 3710.06 IOPS, 14.49 MiB/s [2024-10-08T19:01:42.065Z] 3788.06 IOPS, 14.80 MiB/s [2024-10-08T19:01:42.065Z] 3869.67 IOPS, 15.12 MiB/s [2024-10-08T19:01:42.065Z] 3944.79 IOPS, 15.41 MiB/s [2024-10-08T19:01:42.065Z] 3975.26 IOPS, 15.53 MiB/s [2024-10-08T19:01:42.065Z] 3985.61 IOPS, 15.57 MiB/s [2024-10-08T19:01:42.065Z] 3997.03 IOPS, 15.61 MiB/s [2024-10-08T19:01:42.065Z] 4007.05 IOPS, 15.65 MiB/s [2024-10-08T19:01:42.065Z] 4023.69 IOPS, 15.72 MiB/s [2024-10-08T19:01:42.065Z] 4062.82 IOPS, 15.87 MiB/s [2024-10-08T19:01:42.065Z] 4110.63 IOPS, 16.06 MiB/s [2024-10-08T19:01:42.065Z] 4164.55 IOPS, 16.27 MiB/s [2024-10-08T19:01:42.065Z] 4208.98 IOPS, 16.44 MiB/s [2024-10-08T19:01:42.065Z] [2024-10-08 21:01:37.820169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24200 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.820906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.820962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.820987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.821043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.821097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.821152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.302 [2024-10-08 21:01:37.821205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.821258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.821312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.821365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.302 [2024-10-08 21:01:37.821418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:13.302 [2024-10-08 21:01:37.821448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.821470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.821524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.822756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.822786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.822819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.822846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.822873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.822891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.822916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.822935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.822979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.823002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.823054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.823107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.823886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.823928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.823975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.823997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.824049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.824101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.824478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.303 [2024-10-08 21:01:37.824529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:13.303 [2024-10-08 21:01:37.824708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:13.303 [2024-10-08 21:01:37.824781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.303 [2024-10-08 21:01:37.824800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:13.303 4228.20 IOPS, 16.52 MiB/s [2024-10-08T19:01:42.067Z] 4231.91 IOPS, 16.53 MiB/s [2024-10-08T19:01:42.067Z] 4239.30 IOPS, 16.56 MiB/s [2024-10-08T19:01:42.067Z] 4239.70 IOPS, 16.56 MiB/s [2024-10-08T19:01:42.067Z] Received shutdown signal, test time was about 47.104320 seconds 00:34:13.304 00:34:13.304 Latency(us) 00:34:13.304 [2024-10-08T19:01:42.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.304 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:13.304 Verification LBA range: start 0x0 length 0x4000 00:34:13.304 Nvme0n1 : 47.10 4239.30 16.56 0.00 0.00 30140.33 276.10 6114363.16 00:34:13.304 [2024-10-08T19:01:42.067Z] =================================================================================================================== 00:34:13.304 [2024-10-08T19:01:42.067Z] Total : 4239.30 16.56 0.00 0.00 30140.33 276.10 6114363.16 00:34:13.304 21:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.870 rmmod nvme_tcp 00:34:13.870 rmmod nvme_fabrics 00:34:13.870 rmmod nvme_keyring 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 
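The trace above and just below is the multipath-status teardown: the run deletes the subsystem over rpc.py, removes its try.txt scratch file, unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules, then kills the nvmf_tgt process, restores iptables and drops the target network namespace. Condensed into a standalone sketch (the pid, nqn and paths are the ones from this run; this is not the common.sh implementation itself):

    # multipath-status teardown, condensed from the traced commands (sketch only)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 1825154                                          # nvmf_tgt pid for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK_NVMF rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1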
00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1825154 ']' 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1825154 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1825154 ']' 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1825154 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825154 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825154' 00:34:13.870 killing process with pid 1825154 00:34:13.870 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1825154 00:34:13.871 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1825154 00:34:14.437 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:14.437 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:14.437 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:14.437 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.438 21:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.339 21:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.339 00:34:16.339 real 1m1.375s 00:34:16.339 user 3m12.307s 00:34:16.339 sys 0m15.370s 00:34:16.339 21:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:16.339 21:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:16.339 ************************************ 00:34:16.339 END TEST nvmf_host_multipath_status 00:34:16.339 
************************************ 00:34:16.339 21:01:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:16.339 21:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:16.339 21:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.339 21:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.339 ************************************ 00:34:16.339 START TEST nvmf_discovery_remove_ifc 00:34:16.339 ************************************ 00:34:16.339 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:16.598 * Looking for test storage... 00:34:16.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:16.598 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:16.598 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:16.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.599 --rc genhtml_branch_coverage=1 00:34:16.599 --rc genhtml_function_coverage=1 00:34:16.599 --rc genhtml_legend=1 00:34:16.599 --rc geninfo_all_blocks=1 00:34:16.599 --rc geninfo_unexecuted_blocks=1 00:34:16.599 00:34:16.599 ' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:16.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.599 --rc genhtml_branch_coverage=1 00:34:16.599 --rc genhtml_function_coverage=1 00:34:16.599 --rc genhtml_legend=1 00:34:16.599 --rc geninfo_all_blocks=1 00:34:16.599 --rc geninfo_unexecuted_blocks=1 00:34:16.599 00:34:16.599 ' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:16.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.599 --rc genhtml_branch_coverage=1 00:34:16.599 --rc genhtml_function_coverage=1 00:34:16.599 --rc genhtml_legend=1 00:34:16.599 --rc geninfo_all_blocks=1 00:34:16.599 --rc geninfo_unexecuted_blocks=1 00:34:16.599 00:34:16.599 ' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:16.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.599 --rc genhtml_branch_coverage=1 00:34:16.599 --rc genhtml_function_coverage=1 00:34:16.599 --rc genhtml_legend=1 00:34:16.599 --rc geninfo_all_blocks=1 00:34:16.599 --rc geninfo_unexecuted_blocks=1 00:34:16.599 00:34:16.599 ' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.599 
21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.599 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.600 21:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.886 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.886 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.886 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.886 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.886 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:19.887 21:01:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:19.887 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.887 21:01:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:19.887 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:19.887 Found net devices under 0000:84:00.0: cvl_0_0 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:19.887 Found net devices under 0000:84:00.1: cvl_0_1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.887 
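The nvmf_tcp_init steps traced above wire one port of the E810 NIC into a private network namespace so that target and initiator can exercise real hardware on a single host. A minimal sketch of that wiring, reusing the names and addresses from this run (cvl_0_0 / cvl_0_1, 10.0.0.1 / 10.0.0.2, namespace cvl_0_0_ns_spdk), is:

# flush stale addresses on both ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# move the target-side port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# target side is configured inside the namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# accept NVMe/TCP traffic on the default port and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1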
21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:34:19.887 00:34:19.887 --- 10.0.0.2 ping statistics --- 00:34:19.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.887 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:34:19.887 00:34:19.887 --- 10.0.0.1 ping statistics --- 00:34:19.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.887 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:19.887 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1833616 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1833616 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1833616 ']' 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:19.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:19.888 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.888 [2024-10-08 21:01:48.351833] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:34:19.888 [2024-10-08 21:01:48.351925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.888 [2024-10-08 21:01:48.432838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.888 [2024-10-08 21:01:48.559669] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.888 [2024-10-08 21:01:48.559748] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.888 [2024-10-08 21:01:48.559765] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.888 [2024-10-08 21:01:48.559779] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.888 [2024-10-08 21:01:48.559791] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.888 [2024-10-08 21:01:48.560557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.147 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.148 [2024-10-08 21:01:48.734913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.148 [2024-10-08 21:01:48.743336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:20.148 null0 00:34:20.148 [2024-10-08 21:01:48.776153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1833761 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1833761 /tmp/host.sock 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1833761 ']' 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:20.148 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:20.148 21:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.148 [2024-10-08 21:01:48.860888] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:34:20.148 [2024-10-08 21:01:48.860982] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833761 ] 00:34:20.408 [2024-10-08 21:01:48.971583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.666 [2024-10-08 21:01:49.191524] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.234 21:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 21:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.494 21:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:21.494 21:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.494 21:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.476 [2024-10-08 21:01:51.158919] bdev_nvme.c:7257:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:22.476 [2024-10-08 21:01:51.158987] bdev_nvme.c:7343:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:22.476 [2024-10-08 21:01:51.159045] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:22.737 [2024-10-08 21:01:51.245406] bdev_nvme.c:7186:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:22.737 [2024-10-08 21:01:51.432142] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:22.737 [2024-10-08 21:01:51.432283] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:22.737 [2024-10-08 21:01:51.432374] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:22.737 [2024-10-08 21:01:51.432432] bdev_nvme.c:7076:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:22.737 [2024-10-08 21:01:51.432492] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.737 [2024-10-08 21:01:51.435816] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xeba6b0 was disconnected and freed. delete nvme_qpair. 
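With networking in place, the test starts two SPDK applications: the NVMe-oF target inside the namespace, and a host-side app driven purely over an RPC socket. The discovery attach reported in the bdev_nvme INFO lines above is triggered by the bdev_nvme_start_discovery RPC. Assuming rpc_cmd in these scripts wraps scripts/rpc.py, and with paths relative to the spdk checkout, the host-side sequence is roughly:

# target runs inside the namespace; its log above shows listeners on ports 8009 and 4420
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# host app serves RPCs on /tmp/host.sock and defers framework init
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# enable bdev_nvme options before init, then start the framework
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init

# attach through the discovery service on 10.0.0.2:8009; the referenced
# NVM subsystem on port 4420 shows up as bdev nvme0n1
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach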
00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.737 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:22.997 21:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.948 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.948 21:01:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.208 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:24.208 21:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:25.147 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:25.148 21:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:26.088 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.347 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:26.347 21:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:27.285 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:27.286 21:01:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:27.286 21:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:28.224 [2024-10-08 21:01:56.871361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:28.224 [2024-10-08 21:01:56.871501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.224 [2024-10-08 21:01:56.871549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.224 [2024-10-08 21:01:56.871590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.224 [2024-10-08 21:01:56.871624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.224 [2024-10-08 21:01:56.871694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.224 [2024-10-08 21:01:56.871733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.224 [2024-10-08 21:01:56.871767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.224 [2024-10-08 21:01:56.871800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.224 [2024-10-08 21:01:56.871835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.224 [2024-10-08 21:01:56.871867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.224 [2024-10-08 21:01:56.871898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96fd0 is same with the state(6) to be set 00:34:28.224 [2024-10-08 21:01:56.881378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe96fd0 (9): Bad file descriptor 00:34:28.224 [2024-10-08 21:01:56.891459] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # sort 00:34:28.225 21:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:29.164 [2024-10-08 21:01:57.926909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:29.164 [2024-10-08 21:01:57.927020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe96fd0 with addr=10.0.0.2, port=4420 00:34:29.164 [2024-10-08 21:01:57.927092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96fd0 is same with the state(6) to be set 00:34:29.424 [2024-10-08 21:01:57.927170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe96fd0 (9): Bad file descriptor 00:34:29.424 [2024-10-08 21:01:57.928017] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:29.424 [2024-10-08 21:01:57.928120] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:29.424 [2024-10-08 21:01:57.928180] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:29.424 [2024-10-08 21:01:57.928218] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:29.424 [2024-10-08 21:01:57.928278] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.424 [2024-10-08 21:01:57.928338] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:29.424 21:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.424 21:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:29.424 21:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:30.392 [2024-10-08 21:01:58.930888] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:30.392 [2024-10-08 21:01:58.930958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:30.392 [2024-10-08 21:01:58.930993] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:30.392 [2024-10-08 21:01:58.931025] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:30.392 [2024-10-08 21:01:58.931076] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
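The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs invocations above are the test's wait_for_bdev polling: after the target interface is removed, the host is expected to hit the 2-second ctrlr-loss timeout, delete the controller, and leave an empty bdev list. A rough equivalent of that helper pair, assuming the same RPC socket as the trace, is:

get_bdev_list() {
    # all bdev names known to the host app, sorted, on one line
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the list matches the expected value
    # (nvme0n1 after attach, "" after removal, nvme1n1 after re-attach)
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}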
00:34:30.392 [2024-10-08 21:01:58.931152] bdev_nvme.c:7008:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:30.393 [2024-10-08 21:01:58.931231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.393 [2024-10-08 21:01:58.931285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.393 [2024-10-08 21:01:58.931325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.393 [2024-10-08 21:01:58.931358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.393 [2024-10-08 21:01:58.931392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.393 [2024-10-08 21:01:58.931423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.393 [2024-10-08 21:01:58.931456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.393 [2024-10-08 21:01:58.931487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.393 [2024-10-08 21:01:58.931530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.393 [2024-10-08 21:01:58.931562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.393 [2024-10-08 21:01:58.931593] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
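The burst of ABORTED - SQ DELETION completions, the connect() errno 110 failure, and the aborted controller resets above are the expected fallout of discovery_remove_ifc.sh@75-76, which pulls the target address and link out from under the live connection; with --ctrlr-loss-timeout-sec 2 the host soon stops retrying and deletes the controller. The removal step itself is just:

# drop the target address and down the link while the controller is connected
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down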
00:34:30.393 [2024-10-08 21:01:58.931644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86300 (9): Bad file descriptor 00:34:30.393 [2024-10-08 21:01:58.932115] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:30.393 [2024-10-08 21:01:58.932171] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:30.393 21:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:30.393 21:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.805 21:02:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:31.805 21:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:32.373 [2024-10-08 21:02:00.986873] bdev_nvme.c:7257:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:32.373 [2024-10-08 21:02:00.986933] bdev_nvme.c:7343:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:32.373 [2024-10-08 21:02:00.987007] bdev_nvme.c:7220:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:32.373 [2024-10-08 21:02:01.073355] bdev_nvme.c:7186:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:32.633 [2024-10-08 21:02:01.179354] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:32.633 [2024-10-08 21:02:01.179465] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:32.633 [2024-10-08 21:02:01.179545] bdev_nvme.c:8053:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:32.633 [2024-10-08 21:02:01.179599] bdev_nvme.c:7076:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:32.633 [2024-10-08 21:02:01.179631] bdev_nvme.c:7035:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:32.633 [2024-10-08 21:02:01.184689] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xea16a0 was disconnected and freed. delete nvme_qpair. 
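Once the address and link are restored (discovery_remove_ifc.sh@82-83), the still-running discovery service reconnects on its own: the log above shows a fresh attach that lands as nvme1, which is why the final wait is for nvme1n1 rather than nvme0n1. The restore mirrors the earlier removal:

# put the target address back and bring the link up again
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# then poll bdev_get_bdevs until nvme1n1 appears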
00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1833761 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1833761 ']' 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1833761 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1833761 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1833761' 00:34:32.633 killing process with pid 1833761 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1833761 00:34:32.633 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1833761 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.201 rmmod nvme_tcp 00:34:33.201 rmmod nvme_fabrics 00:34:33.201 rmmod nvme_keyring 00:34:33.201 21:02:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1833616 ']' 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1833616 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1833616 ']' 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1833616 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1833616 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1833616' 00:34:33.201 killing process with pid 1833616 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1833616 00:34:33.201 21:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1833616 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.771 21:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.678 21:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.678 00:34:35.678 real 0m19.373s 00:34:35.678 user 0m27.322s 00:34:35.678 sys 0m4.115s 00:34:35.678 21:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:34:35.678 21:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:35.678 ************************************ 00:34:35.678 END TEST nvmf_discovery_remove_ifc 00:34:35.678 ************************************ 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.937 ************************************ 00:34:35.937 START TEST nvmf_identify_kernel_target 00:34:35.937 ************************************ 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:35.937 * Looking for test storage... 00:34:35.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:35.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.937 --rc genhtml_branch_coverage=1 00:34:35.937 --rc genhtml_function_coverage=1 00:34:35.937 --rc genhtml_legend=1 00:34:35.937 --rc geninfo_all_blocks=1 00:34:35.937 --rc geninfo_unexecuted_blocks=1 00:34:35.937 00:34:35.937 ' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:35.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.937 --rc genhtml_branch_coverage=1 00:34:35.937 --rc genhtml_function_coverage=1 00:34:35.937 --rc genhtml_legend=1 00:34:35.937 --rc geninfo_all_blocks=1 00:34:35.937 --rc geninfo_unexecuted_blocks=1 00:34:35.937 00:34:35.937 ' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:35.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.937 --rc genhtml_branch_coverage=1 00:34:35.937 --rc genhtml_function_coverage=1 00:34:35.937 --rc genhtml_legend=1 00:34:35.937 --rc geninfo_all_blocks=1 00:34:35.937 --rc geninfo_unexecuted_blocks=1 00:34:35.937 00:34:35.937 ' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:35.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.937 --rc genhtml_branch_coverage=1 00:34:35.937 --rc genhtml_function_coverage=1 00:34:35.937 --rc genhtml_legend=1 00:34:35.937 --rc geninfo_all_blocks=1 00:34:35.937 --rc geninfo_unexecuted_blocks=1 00:34:35.937 00:34:35.937 ' 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.937 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:35.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.938 21:02:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.475 21:02:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:38.475 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:38.475 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:38.475 Found net devices under 0000:84:00.0: cvl_0_0 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:38.475 Found net devices under 0000:84:00.1: cvl_0_1 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.475 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:38.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:38.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:34:38.735 00:34:38.735 --- 10.0.0.2 ping statistics --- 00:34:38.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.735 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:38.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:38.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:34:38.735 00:34:38.735 --- 10.0.0.1 ping statistics --- 00:34:38.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.735 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:38.735 21:02:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:38.735 21:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:40.643 Waiting for block devices as requested 00:34:40.643 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:40.643 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:40.901 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:40.901 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:40.901 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:41.161 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:41.161 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:41.161 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:41.161 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:41.420 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:41.420 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:41.420 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:41.680 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:41.680 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:41.680 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:41.680 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:41.941 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
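The configure_kernel_target call traced here exports the local NVMe drive (selected in the following lines) as a kernel NVMe-oF/TCP target on 10.0.0.1:4420, using the nvmet configfs tree named above; nvmftestinit just before it provided the plumbing (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 left in the default namespace as 10.0.0.1, an iptables ACCEPT rule for TCP/4420, and ping checks in both directions). Condensed into a standalone sketch: the NQN, backing device, address and port are the ones used in this run, but xtrace does not show redirection targets, so the configfs attribute file names below follow the standard Linux nvmet layout and are assumptions rather than values copied from this log.

# Sketch only: export a local NVMe namespace as a kernel NVMe-oF/TCP target,
# mirroring the configure_kernel_target steps recorded in this trace.
# Attribute file names assume the standard /sys/kernel/config/nvmet layout.
nqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme0n1
nvmet=/sys/kernel/config/nvmet

modprobe nvmet
modprobe nvmet_tcp

mkdir "$nvmet/subsystems/$nqn"                               # subsystem
echo 1 > "$nvmet/subsystems/$nqn/attr_allow_any_host"        # accept any host NQN
mkdir "$nvmet/subsystems/$nqn/namespaces/1"                  # namespace 1
echo "$dev" > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1 > "$nvmet/subsystems/$nqn/namespaces/1/enable"

mkdir "$nvmet/ports/1"                                       # TCP listener on 10.0.0.1:4420
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"   # expose subsystem on the port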
00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:41.941 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:42.201 No valid GPT data, bailing 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:42.201 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:34:42.201 00:34:42.201 Discovery Log Number of Records 2, Generation counter 2 00:34:42.201 =====Discovery Log Entry 0====== 00:34:42.201 trtype: tcp 00:34:42.201 adrfam: ipv4 00:34:42.201 subtype: current discovery subsystem 00:34:42.201 treq: not specified, sq flow control disable supported 00:34:42.201 portid: 1 00:34:42.202 trsvcid: 4420 00:34:42.202 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:42.202 traddr: 10.0.0.1 00:34:42.202 eflags: none 00:34:42.202 sectype: none 00:34:42.202 =====Discovery Log Entry 1====== 00:34:42.202 trtype: tcp 00:34:42.202 adrfam: ipv4 00:34:42.202 subtype: nvme subsystem 00:34:42.202 treq: not specified, sq flow control disable 
supported 00:34:42.202 portid: 1 00:34:42.202 trsvcid: 4420 00:34:42.202 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:42.202 traddr: 10.0.0.1 00:34:42.202 eflags: none 00:34:42.202 sectype: none 00:34:42.202 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:42.202 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:42.462 ===================================================== 00:34:42.462 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:42.462 ===================================================== 00:34:42.462 Controller Capabilities/Features 00:34:42.462 ================================ 00:34:42.462 Vendor ID: 0000 00:34:42.462 Subsystem Vendor ID: 0000 00:34:42.462 Serial Number: d6e62b6881bd3e3b3249 00:34:42.462 Model Number: Linux 00:34:42.462 Firmware Version: 6.8.9-20 00:34:42.462 Recommended Arb Burst: 0 00:34:42.462 IEEE OUI Identifier: 00 00 00 00:34:42.462 Multi-path I/O 00:34:42.462 May have multiple subsystem ports: No 00:34:42.462 May have multiple controllers: No 00:34:42.462 Associated with SR-IOV VF: No 00:34:42.462 Max Data Transfer Size: Unlimited 00:34:42.462 Max Number of Namespaces: 0 00:34:42.462 Max Number of I/O Queues: 1024 00:34:42.462 NVMe Specification Version (VS): 1.3 00:34:42.462 NVMe Specification Version (Identify): 1.3 00:34:42.462 Maximum Queue Entries: 1024 00:34:42.462 Contiguous Queues Required: No 00:34:42.463 Arbitration Mechanisms Supported 00:34:42.463 Weighted Round Robin: Not Supported 00:34:42.463 Vendor Specific: Not Supported 00:34:42.463 Reset Timeout: 7500 ms 00:34:42.463 Doorbell Stride: 4 bytes 00:34:42.463 NVM Subsystem Reset: Not Supported 00:34:42.463 Command Sets Supported 00:34:42.463 NVM Command Set: Supported 00:34:42.463 Boot Partition: Not Supported 00:34:42.463 Memory Page Size Minimum: 4096 bytes 00:34:42.463 Memory Page Size Maximum: 4096 bytes 00:34:42.463 Persistent Memory Region: Not Supported 00:34:42.463 Optional Asynchronous Events Supported 00:34:42.463 Namespace Attribute Notices: Not Supported 00:34:42.463 Firmware Activation Notices: Not Supported 00:34:42.463 ANA Change Notices: Not Supported 00:34:42.463 PLE Aggregate Log Change Notices: Not Supported 00:34:42.463 LBA Status Info Alert Notices: Not Supported 00:34:42.463 EGE Aggregate Log Change Notices: Not Supported 00:34:42.463 Normal NVM Subsystem Shutdown event: Not Supported 00:34:42.463 Zone Descriptor Change Notices: Not Supported 00:34:42.463 Discovery Log Change Notices: Supported 00:34:42.463 Controller Attributes 00:34:42.463 128-bit Host Identifier: Not Supported 00:34:42.463 Non-Operational Permissive Mode: Not Supported 00:34:42.463 NVM Sets: Not Supported 00:34:42.463 Read Recovery Levels: Not Supported 00:34:42.463 Endurance Groups: Not Supported 00:34:42.463 Predictable Latency Mode: Not Supported 00:34:42.463 Traffic Based Keep ALive: Not Supported 00:34:42.463 Namespace Granularity: Not Supported 00:34:42.463 SQ Associations: Not Supported 00:34:42.463 UUID List: Not Supported 00:34:42.463 Multi-Domain Subsystem: Not Supported 00:34:42.463 Fixed Capacity Management: Not Supported 00:34:42.463 Variable Capacity Management: Not Supported 00:34:42.463 Delete Endurance Group: Not Supported 00:34:42.463 Delete NVM Set: Not Supported 00:34:42.463 Extended LBA Formats Supported: Not Supported 00:34:42.463 Flexible Data Placement 
Supported: Not Supported 00:34:42.463 00:34:42.463 Controller Memory Buffer Support 00:34:42.463 ================================ 00:34:42.463 Supported: No 00:34:42.463 00:34:42.463 Persistent Memory Region Support 00:34:42.463 ================================ 00:34:42.463 Supported: No 00:34:42.463 00:34:42.463 Admin Command Set Attributes 00:34:42.463 ============================ 00:34:42.463 Security Send/Receive: Not Supported 00:34:42.463 Format NVM: Not Supported 00:34:42.463 Firmware Activate/Download: Not Supported 00:34:42.463 Namespace Management: Not Supported 00:34:42.463 Device Self-Test: Not Supported 00:34:42.463 Directives: Not Supported 00:34:42.463 NVMe-MI: Not Supported 00:34:42.463 Virtualization Management: Not Supported 00:34:42.463 Doorbell Buffer Config: Not Supported 00:34:42.463 Get LBA Status Capability: Not Supported 00:34:42.463 Command & Feature Lockdown Capability: Not Supported 00:34:42.463 Abort Command Limit: 1 00:34:42.463 Async Event Request Limit: 1 00:34:42.463 Number of Firmware Slots: N/A 00:34:42.463 Firmware Slot 1 Read-Only: N/A 00:34:42.463 Firmware Activation Without Reset: N/A 00:34:42.463 Multiple Update Detection Support: N/A 00:34:42.463 Firmware Update Granularity: No Information Provided 00:34:42.463 Per-Namespace SMART Log: No 00:34:42.463 Asymmetric Namespace Access Log Page: Not Supported 00:34:42.463 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:42.463 Command Effects Log Page: Not Supported 00:34:42.463 Get Log Page Extended Data: Supported 00:34:42.463 Telemetry Log Pages: Not Supported 00:34:42.463 Persistent Event Log Pages: Not Supported 00:34:42.463 Supported Log Pages Log Page: May Support 00:34:42.463 Commands Supported & Effects Log Page: Not Supported 00:34:42.463 Feature Identifiers & Effects Log Page:May Support 00:34:42.463 NVMe-MI Commands & Effects Log Page: May Support 00:34:42.463 Data Area 4 for Telemetry Log: Not Supported 00:34:42.463 Error Log Page Entries Supported: 1 00:34:42.463 Keep Alive: Not Supported 00:34:42.463 00:34:42.463 NVM Command Set Attributes 00:34:42.463 ========================== 00:34:42.463 Submission Queue Entry Size 00:34:42.463 Max: 1 00:34:42.463 Min: 1 00:34:42.463 Completion Queue Entry Size 00:34:42.463 Max: 1 00:34:42.463 Min: 1 00:34:42.463 Number of Namespaces: 0 00:34:42.463 Compare Command: Not Supported 00:34:42.463 Write Uncorrectable Command: Not Supported 00:34:42.463 Dataset Management Command: Not Supported 00:34:42.463 Write Zeroes Command: Not Supported 00:34:42.463 Set Features Save Field: Not Supported 00:34:42.463 Reservations: Not Supported 00:34:42.463 Timestamp: Not Supported 00:34:42.463 Copy: Not Supported 00:34:42.463 Volatile Write Cache: Not Present 00:34:42.463 Atomic Write Unit (Normal): 1 00:34:42.463 Atomic Write Unit (PFail): 1 00:34:42.463 Atomic Compare & Write Unit: 1 00:34:42.463 Fused Compare & Write: Not Supported 00:34:42.463 Scatter-Gather List 00:34:42.463 SGL Command Set: Supported 00:34:42.463 SGL Keyed: Not Supported 00:34:42.463 SGL Bit Bucket Descriptor: Not Supported 00:34:42.463 SGL Metadata Pointer: Not Supported 00:34:42.463 Oversized SGL: Not Supported 00:34:42.463 SGL Metadata Address: Not Supported 00:34:42.463 SGL Offset: Supported 00:34:42.463 Transport SGL Data Block: Not Supported 00:34:42.463 Replay Protected Memory Block: Not Supported 00:34:42.463 00:34:42.463 Firmware Slot Information 00:34:42.463 ========================= 00:34:42.463 Active slot: 0 00:34:42.463 00:34:42.463 00:34:42.463 Error Log 00:34:42.463 
========= 00:34:42.463 00:34:42.463 Active Namespaces 00:34:42.463 ================= 00:34:42.463 Discovery Log Page 00:34:42.463 ================== 00:34:42.463 Generation Counter: 2 00:34:42.463 Number of Records: 2 00:34:42.463 Record Format: 0 00:34:42.463 00:34:42.463 Discovery Log Entry 0 00:34:42.463 ---------------------- 00:34:42.463 Transport Type: 3 (TCP) 00:34:42.463 Address Family: 1 (IPv4) 00:34:42.463 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:42.463 Entry Flags: 00:34:42.463 Duplicate Returned Information: 0 00:34:42.463 Explicit Persistent Connection Support for Discovery: 0 00:34:42.463 Transport Requirements: 00:34:42.463 Secure Channel: Not Specified 00:34:42.463 Port ID: 1 (0x0001) 00:34:42.463 Controller ID: 65535 (0xffff) 00:34:42.463 Admin Max SQ Size: 32 00:34:42.463 Transport Service Identifier: 4420 00:34:42.463 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:42.463 Transport Address: 10.0.0.1 00:34:42.463 Discovery Log Entry 1 00:34:42.463 ---------------------- 00:34:42.463 Transport Type: 3 (TCP) 00:34:42.463 Address Family: 1 (IPv4) 00:34:42.463 Subsystem Type: 2 (NVM Subsystem) 00:34:42.463 Entry Flags: 00:34:42.463 Duplicate Returned Information: 0 00:34:42.463 Explicit Persistent Connection Support for Discovery: 0 00:34:42.463 Transport Requirements: 00:34:42.463 Secure Channel: Not Specified 00:34:42.463 Port ID: 1 (0x0001) 00:34:42.463 Controller ID: 65535 (0xffff) 00:34:42.463 Admin Max SQ Size: 32 00:34:42.463 Transport Service Identifier: 4420 00:34:42.463 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:42.463 Transport Address: 10.0.0.1 00:34:42.463 21:02:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.463 get_feature(0x01) failed 00:34:42.463 get_feature(0x02) failed 00:34:42.463 get_feature(0x04) failed 00:34:42.463 ===================================================== 00:34:42.463 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.463 ===================================================== 00:34:42.463 Controller Capabilities/Features 00:34:42.463 ================================ 00:34:42.463 Vendor ID: 0000 00:34:42.463 Subsystem Vendor ID: 0000 00:34:42.463 Serial Number: e60ba6be5c45c7effe75 00:34:42.463 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:42.463 Firmware Version: 6.8.9-20 00:34:42.463 Recommended Arb Burst: 6 00:34:42.463 IEEE OUI Identifier: 00 00 00 00:34:42.463 Multi-path I/O 00:34:42.463 May have multiple subsystem ports: Yes 00:34:42.463 May have multiple controllers: Yes 00:34:42.463 Associated with SR-IOV VF: No 00:34:42.463 Max Data Transfer Size: Unlimited 00:34:42.463 Max Number of Namespaces: 1024 00:34:42.463 Max Number of I/O Queues: 128 00:34:42.463 NVMe Specification Version (VS): 1.3 00:34:42.463 NVMe Specification Version (Identify): 1.3 00:34:42.463 Maximum Queue Entries: 1024 00:34:42.463 Contiguous Queues Required: No 00:34:42.463 Arbitration Mechanisms Supported 00:34:42.463 Weighted Round Robin: Not Supported 00:34:42.463 Vendor Specific: Not Supported 00:34:42.463 Reset Timeout: 7500 ms 00:34:42.463 Doorbell Stride: 4 bytes 00:34:42.463 NVM Subsystem Reset: Not Supported 00:34:42.463 Command Sets Supported 00:34:42.463 NVM Command Set: Supported 00:34:42.463 Boot Partition: Not Supported 00:34:42.463 
Memory Page Size Minimum: 4096 bytes 00:34:42.463 Memory Page Size Maximum: 4096 bytes 00:34:42.463 Persistent Memory Region: Not Supported 00:34:42.463 Optional Asynchronous Events Supported 00:34:42.464 Namespace Attribute Notices: Supported 00:34:42.464 Firmware Activation Notices: Not Supported 00:34:42.464 ANA Change Notices: Supported 00:34:42.464 PLE Aggregate Log Change Notices: Not Supported 00:34:42.464 LBA Status Info Alert Notices: Not Supported 00:34:42.464 EGE Aggregate Log Change Notices: Not Supported 00:34:42.464 Normal NVM Subsystem Shutdown event: Not Supported 00:34:42.464 Zone Descriptor Change Notices: Not Supported 00:34:42.464 Discovery Log Change Notices: Not Supported 00:34:42.464 Controller Attributes 00:34:42.464 128-bit Host Identifier: Supported 00:34:42.464 Non-Operational Permissive Mode: Not Supported 00:34:42.464 NVM Sets: Not Supported 00:34:42.464 Read Recovery Levels: Not Supported 00:34:42.464 Endurance Groups: Not Supported 00:34:42.464 Predictable Latency Mode: Not Supported 00:34:42.464 Traffic Based Keep ALive: Supported 00:34:42.464 Namespace Granularity: Not Supported 00:34:42.464 SQ Associations: Not Supported 00:34:42.464 UUID List: Not Supported 00:34:42.464 Multi-Domain Subsystem: Not Supported 00:34:42.464 Fixed Capacity Management: Not Supported 00:34:42.464 Variable Capacity Management: Not Supported 00:34:42.464 Delete Endurance Group: Not Supported 00:34:42.464 Delete NVM Set: Not Supported 00:34:42.464 Extended LBA Formats Supported: Not Supported 00:34:42.464 Flexible Data Placement Supported: Not Supported 00:34:42.464 00:34:42.464 Controller Memory Buffer Support 00:34:42.464 ================================ 00:34:42.464 Supported: No 00:34:42.464 00:34:42.464 Persistent Memory Region Support 00:34:42.464 ================================ 00:34:42.464 Supported: No 00:34:42.464 00:34:42.464 Admin Command Set Attributes 00:34:42.464 ============================ 00:34:42.464 Security Send/Receive: Not Supported 00:34:42.464 Format NVM: Not Supported 00:34:42.464 Firmware Activate/Download: Not Supported 00:34:42.464 Namespace Management: Not Supported 00:34:42.464 Device Self-Test: Not Supported 00:34:42.464 Directives: Not Supported 00:34:42.464 NVMe-MI: Not Supported 00:34:42.464 Virtualization Management: Not Supported 00:34:42.464 Doorbell Buffer Config: Not Supported 00:34:42.464 Get LBA Status Capability: Not Supported 00:34:42.464 Command & Feature Lockdown Capability: Not Supported 00:34:42.464 Abort Command Limit: 4 00:34:42.464 Async Event Request Limit: 4 00:34:42.464 Number of Firmware Slots: N/A 00:34:42.464 Firmware Slot 1 Read-Only: N/A 00:34:42.464 Firmware Activation Without Reset: N/A 00:34:42.464 Multiple Update Detection Support: N/A 00:34:42.464 Firmware Update Granularity: No Information Provided 00:34:42.464 Per-Namespace SMART Log: Yes 00:34:42.464 Asymmetric Namespace Access Log Page: Supported 00:34:42.464 ANA Transition Time : 10 sec 00:34:42.464 00:34:42.464 Asymmetric Namespace Access Capabilities 00:34:42.464 ANA Optimized State : Supported 00:34:42.464 ANA Non-Optimized State : Supported 00:34:42.464 ANA Inaccessible State : Supported 00:34:42.464 ANA Persistent Loss State : Supported 00:34:42.464 ANA Change State : Supported 00:34:42.464 ANAGRPID is not changed : No 00:34:42.464 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:42.464 00:34:42.464 ANA Group Identifier Maximum : 128 00:34:42.464 Number of ANA Group Identifiers : 128 00:34:42.464 Max Number of Allowed Namespaces : 1024 00:34:42.464 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:42.464 Command Effects Log Page: Supported 00:34:42.464 Get Log Page Extended Data: Supported 00:34:42.464 Telemetry Log Pages: Not Supported 00:34:42.464 Persistent Event Log Pages: Not Supported 00:34:42.464 Supported Log Pages Log Page: May Support 00:34:42.464 Commands Supported & Effects Log Page: Not Supported 00:34:42.464 Feature Identifiers & Effects Log Page:May Support 00:34:42.464 NVMe-MI Commands & Effects Log Page: May Support 00:34:42.464 Data Area 4 for Telemetry Log: Not Supported 00:34:42.464 Error Log Page Entries Supported: 128 00:34:42.464 Keep Alive: Supported 00:34:42.464 Keep Alive Granularity: 1000 ms 00:34:42.464 00:34:42.464 NVM Command Set Attributes 00:34:42.464 ========================== 00:34:42.464 Submission Queue Entry Size 00:34:42.464 Max: 64 00:34:42.464 Min: 64 00:34:42.464 Completion Queue Entry Size 00:34:42.464 Max: 16 00:34:42.464 Min: 16 00:34:42.464 Number of Namespaces: 1024 00:34:42.464 Compare Command: Not Supported 00:34:42.464 Write Uncorrectable Command: Not Supported 00:34:42.464 Dataset Management Command: Supported 00:34:42.464 Write Zeroes Command: Supported 00:34:42.464 Set Features Save Field: Not Supported 00:34:42.464 Reservations: Not Supported 00:34:42.464 Timestamp: Not Supported 00:34:42.464 Copy: Not Supported 00:34:42.464 Volatile Write Cache: Present 00:34:42.464 Atomic Write Unit (Normal): 1 00:34:42.464 Atomic Write Unit (PFail): 1 00:34:42.464 Atomic Compare & Write Unit: 1 00:34:42.464 Fused Compare & Write: Not Supported 00:34:42.464 Scatter-Gather List 00:34:42.464 SGL Command Set: Supported 00:34:42.464 SGL Keyed: Not Supported 00:34:42.464 SGL Bit Bucket Descriptor: Not Supported 00:34:42.464 SGL Metadata Pointer: Not Supported 00:34:42.464 Oversized SGL: Not Supported 00:34:42.464 SGL Metadata Address: Not Supported 00:34:42.464 SGL Offset: Supported 00:34:42.464 Transport SGL Data Block: Not Supported 00:34:42.464 Replay Protected Memory Block: Not Supported 00:34:42.464 00:34:42.464 Firmware Slot Information 00:34:42.464 ========================= 00:34:42.464 Active slot: 0 00:34:42.464 00:34:42.464 Asymmetric Namespace Access 00:34:42.464 =========================== 00:34:42.464 Change Count : 0 00:34:42.464 Number of ANA Group Descriptors : 1 00:34:42.464 ANA Group Descriptor : 0 00:34:42.464 ANA Group ID : 1 00:34:42.464 Number of NSID Values : 1 00:34:42.464 Change Count : 0 00:34:42.464 ANA State : 1 00:34:42.464 Namespace Identifier : 1 00:34:42.464 00:34:42.464 Commands Supported and Effects 00:34:42.464 ============================== 00:34:42.464 Admin Commands 00:34:42.464 -------------- 00:34:42.464 Get Log Page (02h): Supported 00:34:42.464 Identify (06h): Supported 00:34:42.464 Abort (08h): Supported 00:34:42.464 Set Features (09h): Supported 00:34:42.464 Get Features (0Ah): Supported 00:34:42.464 Asynchronous Event Request (0Ch): Supported 00:34:42.464 Keep Alive (18h): Supported 00:34:42.464 I/O Commands 00:34:42.464 ------------ 00:34:42.464 Flush (00h): Supported 00:34:42.464 Write (01h): Supported LBA-Change 00:34:42.464 Read (02h): Supported 00:34:42.464 Write Zeroes (08h): Supported LBA-Change 00:34:42.464 Dataset Management (09h): Supported 00:34:42.464 00:34:42.464 Error Log 00:34:42.464 ========= 00:34:42.464 Entry: 0 00:34:42.464 Error Count: 0x3 00:34:42.464 Submission Queue Id: 0x0 00:34:42.464 Command Id: 0x5 00:34:42.464 Phase Bit: 0 00:34:42.464 Status Code: 0x2 00:34:42.464 Status Code Type: 0x0 00:34:42.464 Do Not Retry: 1 00:34:42.464 
Error Location: 0x28 00:34:42.464 LBA: 0x0 00:34:42.464 Namespace: 0x0 00:34:42.464 Vendor Log Page: 0x0 00:34:42.464 ----------- 00:34:42.464 Entry: 1 00:34:42.464 Error Count: 0x2 00:34:42.464 Submission Queue Id: 0x0 00:34:42.464 Command Id: 0x5 00:34:42.464 Phase Bit: 0 00:34:42.464 Status Code: 0x2 00:34:42.464 Status Code Type: 0x0 00:34:42.464 Do Not Retry: 1 00:34:42.464 Error Location: 0x28 00:34:42.464 LBA: 0x0 00:34:42.464 Namespace: 0x0 00:34:42.464 Vendor Log Page: 0x0 00:34:42.464 ----------- 00:34:42.464 Entry: 2 00:34:42.464 Error Count: 0x1 00:34:42.464 Submission Queue Id: 0x0 00:34:42.464 Command Id: 0x4 00:34:42.464 Phase Bit: 0 00:34:42.464 Status Code: 0x2 00:34:42.464 Status Code Type: 0x0 00:34:42.464 Do Not Retry: 1 00:34:42.464 Error Location: 0x28 00:34:42.464 LBA: 0x0 00:34:42.464 Namespace: 0x0 00:34:42.464 Vendor Log Page: 0x0 00:34:42.464 00:34:42.464 Number of Queues 00:34:42.464 ================ 00:34:42.464 Number of I/O Submission Queues: 128 00:34:42.464 Number of I/O Completion Queues: 128 00:34:42.464 00:34:42.464 ZNS Specific Controller Data 00:34:42.464 ============================ 00:34:42.464 Zone Append Size Limit: 0 00:34:42.464 00:34:42.464 00:34:42.464 Active Namespaces 00:34:42.464 ================= 00:34:42.464 get_feature(0x05) failed 00:34:42.464 Namespace ID:1 00:34:42.464 Command Set Identifier: NVM (00h) 00:34:42.464 Deallocate: Supported 00:34:42.464 Deallocated/Unwritten Error: Not Supported 00:34:42.464 Deallocated Read Value: Unknown 00:34:42.464 Deallocate in Write Zeroes: Not Supported 00:34:42.464 Deallocated Guard Field: 0xFFFF 00:34:42.464 Flush: Supported 00:34:42.464 Reservation: Not Supported 00:34:42.464 Namespace Sharing Capabilities: Multiple Controllers 00:34:42.464 Size (in LBAs): 1953525168 (931GiB) 00:34:42.464 Capacity (in LBAs): 1953525168 (931GiB) 00:34:42.464 Utilization (in LBAs): 1953525168 (931GiB) 00:34:42.464 UUID: 9eca8546-35e5-40eb-832f-b4e4609a81e5 00:34:42.464 Thin Provisioning: Not Supported 00:34:42.465 Per-NS Atomic Units: Yes 00:34:42.465 Atomic Boundary Size (Normal): 0 00:34:42.465 Atomic Boundary Size (PFail): 0 00:34:42.465 Atomic Boundary Offset: 0 00:34:42.465 NGUID/EUI64 Never Reused: No 00:34:42.465 ANA group ID: 1 00:34:42.465 Namespace Write Protected: No 00:34:42.465 Number of LBA Formats: 1 00:34:42.465 Current LBA Format: LBA Format #00 00:34:42.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:42.465 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.465 rmmod nvme_tcp 00:34:42.465 rmmod nvme_fabrics 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:42.465 21:02:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.465 21:02:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:45.000 21:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:46.375 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:46.375 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:46.375 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:47.310 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:47.310 00:34:47.310 real 0m11.571s 00:34:47.310 user 0m2.580s 00:34:47.310 sys 0m5.008s 00:34:47.310 21:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.310 21:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:47.310 ************************************ 00:34:47.310 END TEST nvmf_identify_kernel_target 00:34:47.310 ************************************ 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.569 ************************************ 00:34:47.569 START TEST nvmf_auth_host 00:34:47.569 ************************************ 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:47.569 * Looking for test storage... 
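The two identify dumps above come from the host side of the same machine: the script first confirms the kernel target answers discovery with nvme-cli, then runs SPDK's spdk_nvme_identify example tool against the discovery subsystem and against the exported test subsystem. A condensed sketch of that verification step (address, service ID, NQNs and the hostnqn/hostid are the values from this run; paths are relative to the SPDK repository checkout):

# Sketch only: host-side verification of the kernel target, as recorded above.
modprobe nvme-tcp

nvme discover -t tcp -a 10.0.0.1 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02

# first dump: the discovery subsystem
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

# second dump: the NVM subsystem backed by /dev/nvme0n1
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The clean_kernel_target teardown traced just before this point reverses the configfs setup in order: disable the namespace, remove the port-to-subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet and rebind the drive to vfio-pci via scripts/setup.sh, which is what produces the ioatdma/nvme -> vfio-pci lines here.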
00:34:47.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:34:47.569 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.829 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:47.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.830 --rc genhtml_branch_coverage=1 00:34:47.830 --rc genhtml_function_coverage=1 00:34:47.830 --rc genhtml_legend=1 00:34:47.830 --rc geninfo_all_blocks=1 00:34:47.830 --rc geninfo_unexecuted_blocks=1 00:34:47.830 00:34:47.830 ' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:47.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.830 --rc genhtml_branch_coverage=1 00:34:47.830 --rc genhtml_function_coverage=1 00:34:47.830 --rc genhtml_legend=1 00:34:47.830 --rc geninfo_all_blocks=1 00:34:47.830 --rc geninfo_unexecuted_blocks=1 00:34:47.830 00:34:47.830 ' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:47.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.830 --rc genhtml_branch_coverage=1 00:34:47.830 --rc genhtml_function_coverage=1 00:34:47.830 --rc genhtml_legend=1 00:34:47.830 --rc geninfo_all_blocks=1 00:34:47.830 --rc geninfo_unexecuted_blocks=1 00:34:47.830 00:34:47.830 ' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:47.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.830 --rc genhtml_branch_coverage=1 00:34:47.830 --rc genhtml_function_coverage=1 00:34:47.830 --rc genhtml_legend=1 00:34:47.830 --rc geninfo_all_blocks=1 00:34:47.830 --rc geninfo_unexecuted_blocks=1 00:34:47.830 00:34:47.830 ' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.830 21:02:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:47.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.830 21:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:51.126 21:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:51.126 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:51.126 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.126 
21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:51.126 Found net devices under 0000:84:00.0: cvl_0_0 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:51.126 Found net devices under 0000:84:00.1: cvl_0_1 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:51.126 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.127 21:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:51.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:34:51.127 00:34:51.127 --- 10.0.0.2 ping statistics --- 00:34:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.127 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:34:51.127 00:34:51.127 --- 10.0.0.1 ping statistics --- 00:34:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.127 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1841147 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1841147 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1841147 ']' 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
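[Editor's note] The nvmf_tcp_init trace above turns the two E810 port netdevs into a point-to-point TCP test link: cvl_0_0 is moved into a fresh network namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables exception is opened for port 4420, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A condensed, hedged bash sketch of the same steps follows (interface, namespace and address values copied from the log; this is not the exact nvmf/common.sh code):

  TGT_IF=cvl_0_0              # target-side port, moved into a network namespace
  INI_IF=cvl_0_1              # initiator-side port, stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root namespace
  # nvmf_tgt is then started with "ip netns exec $NS ..." so it serves 10.0.0.2:4420.
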
00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:51.127 21:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4a7dab670bf75000caad0b0db04d1a41 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.erA 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4a7dab670bf75000caad0b0db04d1a41 0 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4a7dab670bf75000caad0b0db04d1a41 0 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4a7dab670bf75000caad0b0db04d1a41 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:52.067 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.erA 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.erA 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.erA 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.327 21:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=306b87e3b74ceda72768c7ae414c57b7183b2bab7ee02cce6e5227d896912cfd 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.oTj 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 306b87e3b74ceda72768c7ae414c57b7183b2bab7ee02cce6e5227d896912cfd 3 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 306b87e3b74ceda72768c7ae414c57b7183b2bab7ee02cce6e5227d896912cfd 3 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=306b87e3b74ceda72768c7ae414c57b7183b2bab7ee02cce6e5227d896912cfd 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.oTj 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.oTj 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oTj 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d7bdfdc322ab098f699518088a4c23eacad6dbca061ad84d 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.LjN 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d7bdfdc322ab098f699518088a4c23eacad6dbca061ad84d 0 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d7bdfdc322ab098f699518088a4c23eacad6dbca061ad84d 0 
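[Editor's note] The gen_dhchap_key calls in this stretch of the trace pull random hex from /dev/urandom and hand it to a small python snippet that wraps it in the DH-HMAC-CHAP secret representation. Below is a minimal sketch of that formatting, assuming the usual "DHHC-1:<digest id>:<base64(secret + CRC-32, little-endian)>:" layout, with digest id 00 = unhashed and 01/02/03 = SHA-256/384/512. make_dhchap_secret is a hypothetical name for illustration, not the helper defined in nvmf/common.sh:

  make_dhchap_secret() {
    local digest_id=$1 nbytes=$2      # e.g. 00 24 -> a 48-hex-char secret
    local secret b64
    # Random secret as a hex string, same as "xxd -p -c0 -l N /dev/urandom" in the log.
    secret=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
    # Append a little-endian CRC-32 of the ASCII secret and base64 the result
    # (assumed to match the log's python formatting step).
    b64=$(python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print(base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode())' "$secret")
    echo "DHHC-1:${digest_id}:${b64}:"
  }
  # make_dhchap_secret 00 24   -> e.g. DHHC-1:00:ZDdi...Lhg==:   (shape as seen later in the trace)
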
00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d7bdfdc322ab098f699518088a4c23eacad6dbca061ad84d 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:52.327 21:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.LjN 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.LjN 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LjN 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=da5d2b8aa12402b0a58303c77e47f09a9aa5a78b8b81d9ef 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.eih 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key da5d2b8aa12402b0a58303c77e47f09a9aa5a78b8b81d9ef 2 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 da5d2b8aa12402b0a58303c77e47f09a9aa5a78b8b81d9ef 2 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=da5d2b8aa12402b0a58303c77e47f09a9aa5a78b8b81d9ef 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:34:52.327 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.eih 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.eih 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eih 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.588 21:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6a394cf31ed226da2445a6a3d3883505 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Jo6 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6a394cf31ed226da2445a6a3d3883505 1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6a394cf31ed226da2445a6a3d3883505 1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6a394cf31ed226da2445a6a3d3883505 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Jo6 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Jo6 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Jo6 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0c922dc9e334a109e9d80aaf05a54620 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.j5z 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0c922dc9e334a109e9d80aaf05a54620 1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0c922dc9e334a109e9d80aaf05a54620 1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=0c922dc9e334a109e9d80aaf05a54620 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:34:52.588 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.j5z 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.j5z 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.j5z 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=12be26a70a4e0a4834f3ea3f26d85e814ac6dc6a4b63a95a 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.MPw 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 12be26a70a4e0a4834f3ea3f26d85e814ac6dc6a4b63a95a 2 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 12be26a70a4e0a4834f3ea3f26d85e814ac6dc6a4b63a95a 2 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=12be26a70a4e0a4834f3ea3f26d85e814ac6dc6a4b63a95a 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.MPw 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.MPw 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.MPw 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:34:52.849 21:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bda14254c1f79c7dbca887bb6de3a7f3 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.jkw 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bda14254c1f79c7dbca887bb6de3a7f3 0 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bda14254c1f79c7dbca887bb6de3a7f3 0 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:52.849 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bda14254c1f79c7dbca887bb6de3a7f3 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.jkw 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.jkw 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jkw 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:34:52.850 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=605a9d3b35bebaa6899218e9e70c173b854f533966561d3c407c303858a9276f 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0Ad 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 605a9d3b35bebaa6899218e9e70c173b854f533966561d3c407c303858a9276f 3 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 605a9d3b35bebaa6899218e9e70c173b854f533966561d3c407c303858a9276f 3 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=605a9d3b35bebaa6899218e9e70c173b854f533966561d3c407c303858a9276f 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0Ad 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0Ad 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0Ad 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1841147 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1841147 ']' 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.110 21:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.erA 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oTj ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oTj 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LjN 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eih ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.eih 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Jo6 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.j5z ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j5z 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.MPw 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jkw ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jkw 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0Ad 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:53.370 21:02:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:53.370 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:34:53.630 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:53.630 21:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:55.011 Waiting for block devices as requested 00:34:55.272 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:55.272 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:55.532 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:55.532 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:55.793 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:55.793 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:55.793 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:55.793 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:56.053 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:56.053 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:56.053 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:56.311 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:56.311 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:56.311 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:56.570 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:56.570 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:56.570 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:57.137 No valid GPT data, bailing 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:57.137 21:02:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:34:57.137 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:57.397 21:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:34:57.397 00:34:57.397 Discovery Log Number of Records 2, Generation counter 2 00:34:57.397 =====Discovery Log Entry 0====== 00:34:57.397 trtype: tcp 00:34:57.397 adrfam: ipv4 00:34:57.397 subtype: current discovery subsystem 00:34:57.397 treq: not specified, sq flow control disable supported 00:34:57.397 portid: 1 00:34:57.397 trsvcid: 4420 00:34:57.397 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:57.397 traddr: 10.0.0.1 00:34:57.397 eflags: none 00:34:57.397 sectype: none 00:34:57.397 =====Discovery Log Entry 1====== 00:34:57.397 trtype: tcp 00:34:57.397 adrfam: ipv4 00:34:57.397 subtype: nvme subsystem 00:34:57.397 treq: not specified, sq flow control disable supported 00:34:57.397 portid: 1 00:34:57.397 trsvcid: 4420 00:34:57.397 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:57.397 traddr: 10.0.0.1 00:34:57.397 eflags: none 00:34:57.397 sectype: none 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.397 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.398 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.658 nvme0n1 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.658 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
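The trace up to this point shows how the auth test wires up DH-HMAC-CHAP for one digest/dhgroup/key combination: a host entry is created under the kernel nvmet configfs, whitelisted for the subsystem, loaded with the hash, DH group and DHHC-1 secrets, and then the SPDK initiator reconnects with bdev_nvme_attach_controller passing the matching --dhchap-key/--dhchap-ctrlr-key. The following is a minimal hand-run sketch of that same sequence, not the test script itself; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions, since the trace only shows the values being echoed, and rpc_cmd is treated as a thin wrapper over scripts/rpc.py. The names key1/ckey1 in the attach call refer to secrets the test registered earlier in the run.

  # Target side: register the host NQN and its DH-HMAC-CHAP material.
  # NOTE: attribute file names below are assumptions; the trace only shows the echoed values.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  mkdir -p "$host"
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest under test
  echo 'ffdhe2048'    > "$host/dhchap_dhgroup"    # DH group under test
  echo 'DHHC-1:00:<host secret>'  > "$host/dhchap_key"       # secret the host must present
  echo 'DHHC-1:02:<ctrlr secret>' > "$host/dhchap_ctrl_key"  # secret the controller presents back
  ln -s "$host" \
      /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/

  # Initiator side: enable the digests/dhgroups, then attach with the matching key names.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake succeeds, the controller shows up in rpc.py bdev_nvme_get_controllers as nvme0, which is exactly what the trace checks next before detaching and repeating the loop with the next keyid.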
00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.659 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.918 nvme0n1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.918 21:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.918 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.178 nvme0n1 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.178 21:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 nvme0n1 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.438 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.699 nvme0n1 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.699 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.959 nvme0n1 00:34:58.959 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.959 21:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.959 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.959 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.959 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.959 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:59.219 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:59.220 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.220 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.220 21:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.479 nvme0n1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:59.479 
21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.479 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.740 nvme0n1 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.740 21:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.740 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.000 nvme0n1 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.000 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.261 21:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.261 21:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.522 nvme0n1 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.522 21:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.522 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.782 nvme0n1 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.782 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.783 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.353 nvme0n1 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.353 21:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:01.353 21:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.353 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.922 nvme0n1 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
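The trace above cycles through the same host-side RPC sequence for every digest/dhgroup/key-index combination. A minimal sketch of one pass, using only the rpc_cmd invocations and flags that appear verbatim in the trace (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; how the key0/ckey0 secrets were registered is not visible in this excerpt and is left out):

    # One connect_authenticate pass, sha256 / ffdhe4096 / key index 0 (sketch, not the test script itself).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0 on a successful handshake
    rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next combination

The 10.0.0.1 address is the value get_main_ns_ip resolves from NVMF_INITIATOR_IP, as the nvmf/common.sh lines in the trace show.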
00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.922 21:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.491 nvme0n1 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.491 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.058 nvme0n1 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.059 21:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.059 21:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.657 nvme0n1 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:03.657 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.658 21:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.625 nvme0n1 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.625 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 
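On the target side, nvmet_auth_set_key shows up in the trace only as a handful of echo commands; xtrace does not record where their output is redirected. A rough sketch of what those echoes presumably configure, assuming the test drives the kernel nvmet configfs host entry (the attribute paths below are an assumption, not taken from the log):

    # Assumed kernel-nvmet host entry for the initiator's host NQN; paths are inferred, values match the trace.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'   > "$host_dir/dhchap_hash"      # digest under test
    echo ffdhe6144        > "$host_dir/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:00:...'  > "$host_dir/dhchap_key"       # host secret (placeholder; the full key appears in the trace)
    echo 'DHHC-1:02:...'  > "$host_dir/dhchap_ctrl_key"  # controller secret, only when a ckey is defined

The [[ -z ... ]] guard visible at host/auth.sh@51 matches the last step: the controller key is written only for key indices that define one.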
00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.626 21:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.562 nvme0n1 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.562 21:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.562 21:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.499 nvme0n1 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.499 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.759 21:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.696 nvme0n1 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.696 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.697 21:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.634 nvme0n1 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.634 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.893 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.894 21:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.802 nvme0n1 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.802 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.803 21:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.712 nvme0n1 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:12.712 
21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.712 21:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.089 nvme0n1 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.089 
21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.089 21:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.998 nvme0n1 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.998 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.258 21:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.168 nvme0n1 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.168 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.169 nvme0n1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.169 21:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.430 nvme0n1 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:18.430 21:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.430 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.690 nvme0n1 00:35:18.690 21:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.690 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.948 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.949 nvme0n1 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.949 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.207 nvme0n1 00:35:19.207 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.467 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.467 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.467 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.467 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.467 21:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.467 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 nvme0n1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.731 
21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.731 21:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.731 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.994 nvme0n1 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.994 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.254 nvme0n1 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.254 21:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.254 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:20.254 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.254 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.254 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.514 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.775 nvme0n1 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.775 
21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.775 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 nvme0n1 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.035 
21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.035 21:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.606 nvme0n1 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.606 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.607 21:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.607 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.176 nvme0n1 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.176 21:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 nvme0n1 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.744 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.745 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.314 nvme0n1 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.314 21:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.314 21:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.883 nvme0n1 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.883 21:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.823 nvme0n1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.824 21:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.204 nvme0n1 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.205 21:02:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.205 21:02:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.205 21:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.145 nvme0n1 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.145 21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.145 
21:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.084 nvme0n1 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.084 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.344 21:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.284 nvme0n1 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.284 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.285 21:02:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.285 21:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 nvme0n1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 21:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 nvme0n1 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.093 
21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 21:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.998 nvme0n1 00:35:34.998 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.998 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.999 21:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.904 nvme0n1 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.904 21:03:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:36.904 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:36.905 21:03:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.905 21:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.843 nvme0n1 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.843 nvme0n1 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.843 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.844 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.127 nvme0n1 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.127 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:39.128 
21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.128 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.387 nvme0n1 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.387 
21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:39.387 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.388 21:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.648 nvme0n1 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.648 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.907 nvme0n1 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.907 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.908 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.167 nvme0n1 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.167 
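The nvmet_auth_set_key calls (host/auth.sh@42-51) are the target-side half of each iteration: for the selected keyid they emit the digest ('hmac(sha512)'), the DH group, the DHHC-1 host secret and, when one exists, the controller secret. The excerpt only shows the echoed values, not where they go; on a kernel nvmet target these normally land in the configfs host entry, so the sketch below uses the usual /sys/kernel/config/nvmet paths and names the host entry by its NQN purely as an assumption.

```bash
#!/usr/bin/env bash
# Target-side provisioning sketch for keyid 0 with sha512/ffdhe3072.
# The configfs destinations are assumed (standard Linux nvmet layout);
# only the echoed values themselves appear in this excerpt.
set -euo pipefail

hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

digest='hmac(sha512)'
dhgroup=ffdhe3072
key='DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc:'   # keyid 0, host secret
ckey='DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=:'  # keyid 0, controller secret

echo "$digest"  > "$host_cfg/dhchap_hash"      # HMAC digest for the handshake
echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"   # FFDHE group
echo "$key"     > "$host_cfg/dhchap_key"       # secret the host must present
# The controller secret is written only when this keyid has one (keyid 4 does not).
if [[ -n $ckey ]]; then
    echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
fi
```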
21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:40.167 21:03:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.167 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.168 21:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.426 nvme0n1 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.426 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:40.687 21:03:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.687 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.949 nvme0n1 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
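One bash detail worth calling out from the repeated host/auth.sh@58 lines: the controller key is optional (keyid 4 has none, hence the `[[ -z '' ]]` branch at @51), and the script builds the extra attach arguments from an array expansion so that the flag disappears entirely when the secret is empty, rather than passing an empty string. A small self-contained illustration of that idiom, with the secrets truncated for readability:

```bash
#!/usr/bin/env bash
# The optional controller-key argument, as built at host/auth.sh@58.
# 'ckeys' stands in for the array generated earlier in the real script;
# the secrets are truncated here purely for readability.

ckeys=(
    'DHHC-1:03:MzA2Yjg3...ZAybHcs=:'    # keyid 0 - bidirectional
    'DHHC-1:02:ZGE1ZDJi...iUu2lw==:'    # keyid 1 - bidirectional
    'DHHC-1:01:MGM5MjJk...MjCpeFFb:'    # keyid 2 - bidirectional
    'DHHC-1:00:YmRhMTQy...ZjPlPs84:'    # keyid 3 - bidirectional
    ''                                  # keyid 4 - no controller secret
)

for keyid in "${!ckeys[@]}"; do
    # Expands to the flag plus key name when a controller secret exists,
    # and to nothing at all (not an empty string) when it does not.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done
```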
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.949 21:03:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.949 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.209 nvme0n1 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.209 
21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.209 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.210 21:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
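Everything in this stretch of the log is produced by two nested loops (host/auth.sh@101-104): the outer loop walks the DH groups (ffdhe2048 completed above, ffdhe3072 finishing here, ffdhe4096 next), the inner loop walks every keyid, and each iteration provisions the target key and then runs connect_authenticate against it. A condensed, runnable sketch of that driver is below; the two helpers are stubbed and the keys array stands in for the secrets generated earlier in the script.

```bash
#!/usr/bin/env bash
# Condensed sketch of the driver loop behind this section of the trace
# (host/auth.sh@101-104). Helper bodies are stubbed; their real behavior
# corresponds to the target- and host-side sketches shown earlier.
set -e

digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt
keys=(key0 key1 key2 key3 key4)                      # stand-ins for the generated secrets

nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }

for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
    for keyid in "${!keys[@]}"; do         # host/auth.sh@102
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
    done
done
```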
00:35:41.470 nvme0n1 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.730 21:03:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:41.730 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:41.731 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.731 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.731 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 nvme0n1 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.249 21:03:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:42.249 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.250 21:03:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.250 21:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.818 nvme0n1 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.818 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.079 nvme0n1 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.079 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.339 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.340 21:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.599 nvme0n1 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.599 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.600 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.169 nvme0n1 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:44.169 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.170 21:03:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.170 21:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.108 nvme0n1 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.108 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.368 21:03:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.368 21:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.307 nvme0n1 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.307 21:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.243 nvme0n1 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:47.243 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.244 21:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.621 nvme0n1 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.621 21:03:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.621 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.622 21:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.562 nvme0n1 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGE3ZGFiNjcwYmY3NTAwMGNhYWQwYjBkYjA0ZDFhNDFJsRMc: 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA2Yjg3ZTNiNzRjZWRhNzI3NjhjN2FlNDE0YzU3YjcxODNiMmJhYjdlZTAyY2NlNmU1MjI3ZDg5NjkxMmNmZAybHcs=: 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.562 21:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.470 nvme0n1 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.470 21:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.470 21:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.378 nvme0n1 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.378 21:03:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.378 21:03:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.378 21:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.285 nvme0n1 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJiZTI2YTcwYTRlMGE0ODM0ZjNlYTNmMjZkODVlODE0YWM2ZGM2YTRiNjNhOTVh+5/8Sw==: 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmRhMTQyNTRjMWY3OWM3ZGJjYTg4N2JiNmRlM2E3ZjPlPs84: 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:55.285 21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.285 
21:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.193 nvme0n1 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjA1YTlkM2IzNWJlYmFhNjg5OTIxOGU5ZTcwYzE3M2I4NTRmNTMzOTY2NTYxZDNjNDA3YzMwMzg1OGE5Mjc2Zo3FECY=: 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.193 21:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 nvme0n1 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 request: 00:35:59.329 { 00:35:59.329 "name": "nvme0", 00:35:59.329 "trtype": "tcp", 00:35:59.329 "traddr": "10.0.0.1", 00:35:59.329 "adrfam": "ipv4", 00:35:59.329 "trsvcid": "4420", 00:35:59.329 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.329 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.329 "prchk_reftag": false, 00:35:59.329 "prchk_guard": false, 00:35:59.329 "hdgst": false, 00:35:59.329 "ddgst": false, 00:35:59.329 "allow_unrecognized_csi": false, 00:35:59.329 "method": "bdev_nvme_attach_controller", 00:35:59.329 "req_id": 1 00:35:59.329 } 00:35:59.329 Got JSON-RPC error response 00:35:59.329 response: 00:35:59.329 { 00:35:59.329 "code": -5, 00:35:59.329 "message": "Input/output error" 00:35:59.329 } 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.329 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
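The "Input/output error" responses recorded here are expected: host/auth.sh wraps each attach attempt in the NOT helper, so the test passes only when bdev_nvme_attach_controller is rejected. First the host presents no DH-HMAC-CHAP key at all, then (below) the wrong key, against a kernel target that requires authentication. A minimal sketch of the idiom, assuming a reduced NOT helper (the real one in common/autotest_common.sh also validates the argument type and tracks the exit status through es); rpc_cmd is the test suite's JSON-RPC wrapper:

    # Sketch only, not the exact autotest_common.sh code.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> negative test fails
        fi
        return 0        # command failed as expected -> negative test passes
    }

    # Expected to fail with JSON-RPC code -5 (Input/output error): no --dhchap-key
    # is given, but the target demands DH-HMAC-CHAP for this host NQN.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0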
00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.330 21:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.330 request: 00:35:59.330 { 00:35:59.330 "name": "nvme0", 00:35:59.330 "trtype": "tcp", 00:35:59.330 "traddr": "10.0.0.1", 00:35:59.330 "adrfam": "ipv4", 00:35:59.330 "trsvcid": "4420", 00:35:59.330 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.330 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.330 "prchk_reftag": false, 00:35:59.330 "prchk_guard": false, 00:35:59.330 "hdgst": false, 00:35:59.330 "ddgst": false, 00:35:59.330 "dhchap_key": "key2", 00:35:59.330 "allow_unrecognized_csi": false, 00:35:59.330 "method": "bdev_nvme_attach_controller", 00:35:59.330 "req_id": 1 00:35:59.330 } 00:35:59.330 Got JSON-RPC error response 00:35:59.330 response: 00:35:59.330 { 00:35:59.330 "code": -5, 00:35:59.330 "message": "Input/output error" 00:35:59.330 } 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.330 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
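Both rejections make sense given how the target was provisioned: earlier in this trace (the nvmet_auth_set_key sha256 ffdhe2048 1 call at host/auth.sh@110) the kernel target was loaded with the keyid-1 key/ckey pair for nqn.2024-02.io.spdk:host0, so presenting key2 (just attempted), or key1 with the mismatched controller key ckey2 (attempted next), cannot complete the bidirectional handshake. A hedged sketch of where those echo calls land, assuming the standard nvmet configfs attributes (the exact redirects live in host/auth.sh and are not visible in this excerpt; secrets abbreviated):

    hostnqn=nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'          > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
    echo ffdhe2048               > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
    echo 'DHHC-1:00:ZDdiZGZk...' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key
    echo 'DHHC-1:02:ZGE1ZDJi...' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key

Only the attach that pairs --dhchap-key key1 with --dhchap-ctrlr-key ckey1 (host/auth.sh@128) succeeds and produces the nvme0n1 namespace seen further on in the log.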
00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.590 request: 00:35:59.590 { 00:35:59.590 "name": "nvme0", 00:35:59.590 "trtype": "tcp", 00:35:59.590 "traddr": "10.0.0.1", 00:35:59.590 "adrfam": "ipv4", 00:35:59.590 "trsvcid": "4420", 00:35:59.590 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.590 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.590 "prchk_reftag": false, 00:35:59.590 "prchk_guard": false, 00:35:59.590 "hdgst": false, 00:35:59.590 "ddgst": false, 00:35:59.590 "dhchap_key": "key1", 00:35:59.590 "dhchap_ctrlr_key": "ckey2", 00:35:59.590 "allow_unrecognized_csi": false, 00:35:59.590 "method": "bdev_nvme_attach_controller", 00:35:59.590 "req_id": 1 00:35:59.590 } 00:35:59.590 Got JSON-RPC error response 00:35:59.590 response: 00:35:59.590 { 00:35:59.590 "code": -5, 00:35:59.590 "message": "Input/output 
error" 00:35:59.590 } 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:59.590 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:59.591 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:59.591 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:59.591 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.591 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.850 nvme0n1 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.850 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.109 request: 00:36:00.109 { 00:36:00.109 "name": "nvme0", 00:36:00.109 "dhchap_key": "key1", 00:36:00.109 "dhchap_ctrlr_key": "ckey2", 00:36:00.109 "method": "bdev_nvme_set_keys", 00:36:00.109 "req_id": 1 00:36:00.109 } 00:36:00.109 Got JSON-RPC error response 00:36:00.109 response: 00:36:00.109 { 00:36:00.109 "code": -13, 00:36:00.109 "message": "Permission denied" 00:36:00.109 } 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:00.109 21:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:01.045 21:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDdiZGZkYzMyMmFiMDk4ZjY5OTUxODA4OGE0YzIzZWFjYWQ2ZGJjYTA2MWFkODRku26Lhg==: 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: ]] 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZGE1ZDJiOGFhMTI0MDJiMGE1ODMwM2M3N2U0N2YwOWE5YWE1YTc4YjhiODFkOWVmiUu2lw==: 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.422 21:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.422 nvme0n1 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEzOTRjZjMxZWQyMjZkYTI0NDVhNmEzZDM4ODM1MDXsUv4T: 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: ]] 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5MjJkYzllMzM0YTEwOWU5ZDgwYWFmMDVhNTQ2MjCpeFFb: 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.422 request: 00:36:02.422 { 00:36:02.422 "name": "nvme0", 00:36:02.422 "dhchap_key": "key2", 00:36:02.422 "dhchap_ctrlr_key": "ckey1", 00:36:02.422 "method": "bdev_nvme_set_keys", 00:36:02.422 "req_id": 1 00:36:02.422 } 00:36:02.422 Got JSON-RPC error response 00:36:02.422 response: 00:36:02.422 { 00:36:02.422 "code": -13, 00:36:02.422 "message": "Permission denied" 00:36:02.422 } 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.422 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.423 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:02.423 21:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:03.802 21:03:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.802 rmmod nvme_tcp 00:36:03.802 rmmod nvme_fabrics 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:03.802 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1841147 ']' 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1841147 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1841147 ']' 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1841147 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1841147 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1841147' 00:36:03.803 killing process with pid 1841147 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1841147 00:36:03.803 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1841147 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:04.061 21:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:06.602 21:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:07.986 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:07.986 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:07.987 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:07.987 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:07.987 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:07.987 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:08.247 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:08.247 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:08.247 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:08.247 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:08.247 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:09.189 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:36:09.189 21:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.erA /tmp/spdk.key-null.LjN /tmp/spdk.key-sha256.Jo6 /tmp/spdk.key-sha384.MPw /tmp/spdk.key-sha512.0Ad /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:09.189 21:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:10.572 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:10.573 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:10.573 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:36:10.573 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:10.573 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:10.573 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:10.573 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:10.573 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:10.573 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:10.573 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:10.573 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:10.573 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:10.573 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:10.573 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:10.573 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:10.573 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:10.573 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:10.834 00:36:10.834 real 1m23.340s 00:36:10.834 user 1m22.226s 00:36:10.834 sys 0m9.219s 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.834 ************************************ 00:36:10.834 END TEST nvmf_auth_host 00:36:10.834 ************************************ 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.834 ************************************ 00:36:10.834 START TEST nvmf_digest 00:36:10.834 ************************************ 00:36:10.834 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.834 * Looking for test storage... 
00:36:11.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:11.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.094 --rc genhtml_branch_coverage=1 00:36:11.094 --rc genhtml_function_coverage=1 00:36:11.094 --rc genhtml_legend=1 00:36:11.094 --rc geninfo_all_blocks=1 00:36:11.094 --rc geninfo_unexecuted_blocks=1 00:36:11.094 00:36:11.094 ' 00:36:11.094 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:11.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.094 --rc genhtml_branch_coverage=1 00:36:11.094 --rc genhtml_function_coverage=1 00:36:11.094 --rc genhtml_legend=1 00:36:11.094 --rc geninfo_all_blocks=1 00:36:11.095 --rc geninfo_unexecuted_blocks=1 00:36:11.095 00:36:11.095 ' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.095 --rc genhtml_branch_coverage=1 00:36:11.095 --rc genhtml_function_coverage=1 00:36:11.095 --rc genhtml_legend=1 00:36:11.095 --rc geninfo_all_blocks=1 00:36:11.095 --rc geninfo_unexecuted_blocks=1 00:36:11.095 00:36:11.095 ' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.095 --rc genhtml_branch_coverage=1 00:36:11.095 --rc genhtml_function_coverage=1 00:36:11.095 --rc genhtml_legend=1 00:36:11.095 --rc geninfo_all_blocks=1 00:36:11.095 --rc geninfo_unexecuted_blocks=1 00:36:11.095 00:36:11.095 ' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.095 
21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:11.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:11.095 21:03:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.095 21:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.449 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.450 
21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:14.450 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:14.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:14.450 Found net devices under 0000:84:00.0: cvl_0_0 
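The discovery loop above resolves each supported NIC from its PCI address to the kernel interface name by globbing sysfs, which is how the "Found net devices under ..." lines are produced. A minimal standalone sketch of that lookup, assuming the representative address 0000:84:00.0 taken from the messages above:

  #!/usr/bin/env bash
  # Sketch only: map a PCI function to its netdev name via sysfs, mirroring the
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) pattern used by nvmf/common.sh above.
  pci=0000:84:00.0                                     # substitute any address printed above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

If a port were bound to a userspace driver instead of a kernel one, the glob would match nothing; the (( 1 == 0 )) checks in the trace above appear to be that same guard with the discovered-interface count already expanded.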
00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:14.450 Found net devices under 0000:84:00.1: cvl_0_1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:36:14.450 00:36:14.450 --- 10.0.0.2 ping statistics --- 00:36:14.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.450 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:14.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:14.450 00:36:14.450 --- 10.0.0.1 ping statistics --- 00:36:14.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.450 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:36:14.450 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.451 ************************************ 00:36:14.451 START TEST nvmf_digest_clean 00:36:14.451 ************************************ 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1854739 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1854739 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1854739 ']' 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:14.451 21:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.451 [2024-10-08 21:03:43.012807] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:14.451 [2024-10-08 21:03:43.012906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.451 [2024-10-08 21:03:43.123899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.728 [2024-10-08 21:03:43.317127] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.728 [2024-10-08 21:03:43.317256] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.728 [2024-10-08 21:03:43.317293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.728 [2024-10-08 21:03:43.317322] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.728 [2024-10-08 21:03:43.317348] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
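Everything the target does in this test happens inside the cvl_0_0_ns_spdk network namespace set up a few lines earlier, so the two ports of the same physical adapter can exercise NVMe/TCP against each other on one host: cvl_0_1 keeps 10.0.0.1/24 in the default namespace as the initiator side, while cvl_0_0 is moved into the namespace with 10.0.0.2/24 as the target side. A condensed sketch of that wiring, using the interface names and addresses shown in the log:

  # Sketch: recreate the split-namespace loopback built by nvmf_tcp_init above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  # The target itself is then always launched behind the namespace prefix, e.g.
  # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc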
00:36:14.728 [2024-10-08 21:03:43.318753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.676 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.677 null0 00:36:15.677 [2024-10-08 21:03:44.357125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.677 [2024-10-08 21:03:44.381479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1854892 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1854892 /var/tmp/bperf.sock 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1854892 ']' 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.677 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.936 [2024-10-08 21:03:44.488461] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:15.936 [2024-10-08 21:03:44.488618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854892 ] 00:36:15.937 [2024-10-08 21:03:44.628912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.195 [2024-10-08 21:03:44.845316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.454 21:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:16.454 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:16.454 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:16.454 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:16.454 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.023 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.023 21:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.592 nvme0n1 00:36:17.592 21:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.592 21:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.851 Running I/O for 2 seconds... 
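Each benchmark pass in this suite drives a dedicated bdevperf process over its own RPC socket rather than the target's: start bdevperf with --wait-for-rpc, finish framework init, attach an NVMe-oF/TCP controller with data digest enabled (--ddgst), then trigger the timed workload. A condensed sketch of that sequence with the workspace paths used throughout this log; the real harness additionally waits for the socket to appear before issuing RPCs:

  # Sketch of the per-run bperf RPC sequence, as reflected in the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests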
00:36:20.181 7733.00 IOPS, 30.21 MiB/s [2024-10-08T19:03:48.944Z] 7628.00 IOPS, 29.80 MiB/s 00:36:20.181 Latency(us) 00:36:20.181 [2024-10-08T19:03:48.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:20.181 nvme0n1 : 2.01 7652.42 29.89 0.00 0.00 16689.39 7039.05 35535.08 00:36:20.181 [2024-10-08T19:03:48.944Z] =================================================================================================================== 00:36:20.181 [2024-10-08T19:03:48.944Z] Total : 7652.42 29.89 0.00 0.00 16689.39 7039.05 35535.08 00:36:20.181 { 00:36:20.181 "results": [ 00:36:20.181 { 00:36:20.181 "job": "nvme0n1", 00:36:20.181 "core_mask": "0x2", 00:36:20.181 "workload": "randread", 00:36:20.181 "status": "finished", 00:36:20.181 "queue_depth": 128, 00:36:20.181 "io_size": 4096, 00:36:20.181 "runtime": 2.010345, 00:36:20.181 "iops": 7652.417868574797, 00:36:20.181 "mibps": 29.8922572991203, 00:36:20.181 "io_failed": 0, 00:36:20.181 "io_timeout": 0, 00:36:20.181 "avg_latency_us": 16689.387523352787, 00:36:20.181 "min_latency_us": 7039.051851851852, 00:36:20.181 "max_latency_us": 35535.07555555556 00:36:20.181 } 00:36:20.181 ], 00:36:20.181 "core_count": 1 00:36:20.181 } 00:36:20.181 21:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:20.181 21:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:20.181 21:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:20.181 21:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:20.181 21:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:20.181 | select(.opcode=="crc32c") 00:36:20.181 | "\(.module_name) \(.executed)"' 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1854892 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1854892 ']' 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1854892 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1854892 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1854892' 00:36:20.750 killing process with pid 1854892 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1854892 00:36:20.750 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.750 00:36:20.750 Latency(us) 00:36:20.750 [2024-10-08T19:03:49.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.750 [2024-10-08T19:03:49.513Z] =================================================================================================================== 00:36:20.750 [2024-10-08T19:03:49.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.750 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1854892 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:21.010 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1855521 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1855521 /var/tmp/bperf.sock 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1855521 ']' 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:21.011 21:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.271 [2024-10-08 21:03:49.829228] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:36:21.271 [2024-10-08 21:03:49.829347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855521 ] 00:36:21.271 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:21.271 Zero copy mechanism will not be used. 00:36:21.271 [2024-10-08 21:03:49.940881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.531 [2024-10-08 21:03:50.153008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.471 21:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:22.471 21:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:22.471 21:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:22.471 21:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:22.471 21:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.040 21:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.040 21:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.300 nvme0n1 00:36:23.300 21:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:23.300 21:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.561 Zero copy mechanism will not be used. 00:36:23.561 Running I/O for 2 seconds... 
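The MiB/s column in these result tables is not measured separately; it follows from the measured IOPS and the configured I/O size as IOPS * io_size / 2^20. For the 4 KiB randread pass above, 7652.42 * 4096 / 1048576 is about 29.89 MiB/s, which matches the reported value. A one-liner to reproduce the conversion, assuming those figures:

  # Sketch: recompute MiB/s from the reported IOPS and I/O size of the 4 KiB randread pass.
  awk 'BEGIN { iops = 7652.42; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / 1048576 }'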
00:36:25.441 2688.00 IOPS, 336.00 MiB/s [2024-10-08T19:03:54.204Z] 2658.00 IOPS, 332.25 MiB/s 00:36:25.441 Latency(us) 00:36:25.441 [2024-10-08T19:03:54.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:25.441 nvme0n1 : 2.01 2655.95 331.99 0.00 0.00 6015.10 2075.31 10194.49 00:36:25.441 [2024-10-08T19:03:54.204Z] =================================================================================================================== 00:36:25.441 [2024-10-08T19:03:54.204Z] Total : 2655.95 331.99 0.00 0.00 6015.10 2075.31 10194.49 00:36:25.441 { 00:36:25.441 "results": [ 00:36:25.441 { 00:36:25.441 "job": "nvme0n1", 00:36:25.441 "core_mask": "0x2", 00:36:25.441 "workload": "randread", 00:36:25.441 "status": "finished", 00:36:25.441 "queue_depth": 16, 00:36:25.441 "io_size": 131072, 00:36:25.441 "runtime": 2.007567, 00:36:25.441 "iops": 2655.9512086022532, 00:36:25.441 "mibps": 331.99390107528166, 00:36:25.441 "io_failed": 0, 00:36:25.441 "io_timeout": 0, 00:36:25.441 "avg_latency_us": 6015.097374343586, 00:36:25.441 "min_latency_us": 2075.306666666667, 00:36:25.441 "max_latency_us": 10194.488888888889 00:36:25.441 } 00:36:25.441 ], 00:36:25.441 "core_count": 1 00:36:25.441 } 00:36:25.441 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:25.441 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:25.441 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:25.441 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:25.441 | select(.opcode=="crc32c") 00:36:25.441 | "\(.module_name) \(.executed)"' 00:36:25.441 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1855521 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1855521 ']' 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1855521 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:26.011 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1855521 00:36:26.272 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:26.272 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:36:26.272 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1855521' 00:36:26.272 killing process with pid 1855521 00:36:26.272 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1855521 00:36:26.272 Received shutdown signal, test time was about 2.000000 seconds 00:36:26.272 00:36:26.272 Latency(us) 00:36:26.272 [2024-10-08T19:03:55.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.272 [2024-10-08T19:03:55.035Z] =================================================================================================================== 00:36:26.272 [2024-10-08T19:03:55.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.272 21:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1855521 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1856094 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1856094 /var/tmp/bperf.sock 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1856094 ']' 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:26.530 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.530 [2024-10-08 21:03:55.281017] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:36:26.530 [2024-10-08 21:03:55.281118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856094 ] 00:36:26.789 [2024-10-08 21:03:55.349552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.789 [2024-10-08 21:03:55.477222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.048 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:27.048 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:27.048 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.048 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.048 21:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:27.308 21:03:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:27.308 21:03:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.249 nvme0n1 00:36:28.250 21:03:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.250 21:03:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.250 Running I/O for 2 seconds... 
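After each timed run the harness queries accel framework statistics over the same bperf socket and parses them with the jq filter shown above, reading back which module executed the crc32c digests and how many times. The pass criterion in this clean variant is simply that the count is non-zero and the module is software, since DSA offload is disabled (scan_dsa=false). A sketch of that check, with a hypothetical stats document standing in for the real accel_get_stats output:

  # Sketch: verify crc32c was executed by the expected accel module.
  # The JSON literal is a made-up stand-in for rpc.py -s /var/tmp/bperf.sock accel_get_stats.
  stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":1293}]}'
  read -r acc_module acc_executed < <(jq -rc \
      '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' <<< "$stats")
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c ran in the software module"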
00:36:30.575 9307.00 IOPS, 36.36 MiB/s [2024-10-08T19:03:59.338Z] 9022.50 IOPS, 35.24 MiB/s 00:36:30.575 Latency(us) 00:36:30.575 [2024-10-08T19:03:59.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.575 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.575 nvme0n1 : 2.02 9007.91 35.19 0.00 0.00 14169.60 4344.79 22719.15 00:36:30.575 [2024-10-08T19:03:59.338Z] =================================================================================================================== 00:36:30.575 [2024-10-08T19:03:59.338Z] Total : 9007.91 35.19 0.00 0.00 14169.60 4344.79 22719.15 00:36:30.575 { 00:36:30.575 "results": [ 00:36:30.575 { 00:36:30.575 "job": "nvme0n1", 00:36:30.575 "core_mask": "0x2", 00:36:30.575 "workload": "randwrite", 00:36:30.575 "status": "finished", 00:36:30.575 "queue_depth": 128, 00:36:30.575 "io_size": 4096, 00:36:30.575 "runtime": 2.017449, 00:36:30.575 "iops": 9007.910484973845, 00:36:30.575 "mibps": 35.18715033192908, 00:36:30.575 "io_failed": 0, 00:36:30.575 "io_timeout": 0, 00:36:30.575 "avg_latency_us": 14169.596405249138, 00:36:30.575 "min_latency_us": 4344.794074074074, 00:36:30.575 "max_latency_us": 22719.146666666667 00:36:30.575 } 00:36:30.575 ], 00:36:30.575 "core_count": 1 00:36:30.575 } 00:36:30.575 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:30.575 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:30.575 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:30.575 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:30.575 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:30.575 | select(.opcode=="crc32c") 00:36:30.575 | "\(.module_name) \(.executed)"' 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1856094 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1856094 ']' 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1856094 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856094 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856094' 00:36:30.834 killing process with pid 1856094 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1856094 00:36:30.834 Received shutdown signal, test time was about 2.000000 seconds 00:36:30.834 00:36:30.834 Latency(us) 00:36:30.834 [2024-10-08T19:03:59.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.834 [2024-10-08T19:03:59.597Z] =================================================================================================================== 00:36:30.834 [2024-10-08T19:03:59.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.834 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1856094 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1856622 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1856622 /var/tmp/bperf.sock 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1856622 ']' 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:31.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:31.405 21:03:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:31.405 [2024-10-08 21:04:00.031081] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:36:31.405 [2024-10-08 21:04:00.031179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856622 ] 00:36:31.405 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:31.405 Zero copy mechanism will not be used. 00:36:31.406 [2024-10-08 21:04:00.162000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.666 [2024-10-08 21:04:00.382973] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.926 21:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:31.926 21:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:31.926 21:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:31.926 21:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:31.926 21:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:32.867 21:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:32.868 21:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.437 nvme0n1 00:36:33.437 21:04:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:33.437 21:04:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:33.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:33.697 Zero copy mechanism will not be used. 00:36:33.697 Running I/O for 2 seconds... 
00:36:36.016 2494.00 IOPS, 311.75 MiB/s [2024-10-08T19:04:04.779Z] 3256.00 IOPS, 407.00 MiB/s 00:36:36.016 Latency(us) 00:36:36.016 [2024-10-08T19:04:04.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.016 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:36.016 nvme0n1 : 2.01 3257.30 407.16 0.00 0.00 4899.23 2912.71 10534.31 00:36:36.016 [2024-10-08T19:04:04.779Z] =================================================================================================================== 00:36:36.016 [2024-10-08T19:04:04.779Z] Total : 3257.30 407.16 0.00 0.00 4899.23 2912.71 10534.31 00:36:36.016 { 00:36:36.016 "results": [ 00:36:36.016 { 00:36:36.016 "job": "nvme0n1", 00:36:36.016 "core_mask": "0x2", 00:36:36.016 "workload": "randwrite", 00:36:36.016 "status": "finished", 00:36:36.016 "queue_depth": 16, 00:36:36.016 "io_size": 131072, 00:36:36.016 "runtime": 2.005032, 00:36:36.016 "iops": 3257.3046215721247, 00:36:36.016 "mibps": 407.1630776965156, 00:36:36.016 "io_failed": 0, 00:36:36.016 "io_timeout": 0, 00:36:36.016 "avg_latency_us": 4899.230113249063, 00:36:36.016 "min_latency_us": 2912.711111111111, 00:36:36.016 "max_latency_us": 10534.305185185185 00:36:36.016 } 00:36:36.016 ], 00:36:36.016 "core_count": 1 00:36:36.016 } 00:36:36.016 21:04:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:36.016 21:04:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:36.016 21:04:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:36.016 21:04:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:36.016 21:04:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:36.016 | select(.opcode=="crc32c") 00:36:36.016 | "\(.module_name) \(.executed)"' 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1856622 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1856622 ']' 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1856622 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856622 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856622' 00:36:36.582 killing process with pid 1856622 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1856622 00:36:36.582 Received shutdown signal, test time was about 2.000000 seconds 00:36:36.582 00:36:36.582 Latency(us) 00:36:36.582 [2024-10-08T19:04:05.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.582 [2024-10-08T19:04:05.345Z] =================================================================================================================== 00:36:36.582 [2024-10-08T19:04:05.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:36.582 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1856622 00:36:37.151 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1854739 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1854739 ']' 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1854739 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1854739 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1854739' 00:36:37.152 killing process with pid 1854739 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1854739 00:36:37.152 21:04:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1854739 00:36:37.410 00:36:37.410 real 0m23.135s 00:36:37.410 user 0m48.408s 00:36:37.410 sys 0m5.978s 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 ************************************ 00:36:37.410 END TEST nvmf_digest_clean 00:36:37.410 ************************************ 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 ************************************ 00:36:37.410 START TEST nvmf_digest_error 00:36:37.410 ************************************ 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1857325 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1857325 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1857325 ']' 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:37.410 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.668 [2024-10-08 21:04:06.209298] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:37.668 [2024-10-08 21:04:06.209394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.668 [2024-10-08 21:04:06.282760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.668 [2024-10-08 21:04:06.398515] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.668 [2024-10-08 21:04:06.398579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.668 [2024-10-08 21:04:06.398607] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.668 [2024-10-08 21:04:06.398618] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.668 [2024-10-08 21:04:06.398628] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
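The error-path variant that starts here launches a fresh target with --wait-for-rpc for the same reason the bperf instances do: configuration RPCs must land before the framework finishes initializing, and here that configuration includes rerouting crc32c to the error-injecting accel module (the accel_assign_opc call visible just below). A rough sketch of that ordering, with the caveat that the exact RPC batching inside digest.sh is not shown in this excerpt:

  # Sketch: configure accel before the framework starts, then inject digest errors per run.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error   # route crc32c to the error module
  "$SPDK/scripts/rpc.py" framework_start_init                  # only now complete initialization
  # Per test case, injection is then toggled with, e.g.:
  #   rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  #   rpc.py accel_error_inject_error -o crc32c -t disable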
00:36:37.668 [2024-10-08 21:04:06.399383] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.926 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:37.926 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:37.926 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:37.926 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:37.926 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.187 [2024-10-08 21:04:06.712647] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.187 null0 00:36:38.187 [2024-10-08 21:04:06.833389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.187 [2024-10-08 21:04:06.857606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1857462 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1857462 /var/tmp/bperf.sock 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1857462 ']' 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:38.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:38.187 21:04:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.187 [2024-10-08 21:04:06.914983] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:38.187 [2024-10-08 21:04:06.915069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857462 ] 00:36:38.447 [2024-10-08 21:04:07.026219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.707 [2024-10-08 21:04:07.249499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.707 21:04:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:38.707 21:04:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:38.707 21:04:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:38.707 21:04:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.647 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.907 nvme0n1 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:39.907 21:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:40.167 Running I/O for 2 seconds... 00:36:40.167 [2024-10-08 21:04:08.878957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.167 [2024-10-08 21:04:08.879075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.167 [2024-10-08 21:04:08.879123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.167 [2024-10-08 21:04:08.906528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.167 [2024-10-08 21:04:08.906609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.167 [2024-10-08 21:04:08.906671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:08.944432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:08.944511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:08.944554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:08.984146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:08.984240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:08.984285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.017968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.018044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.018086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.057299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.057380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.057424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.086212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.086290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.086333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.117493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.117570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.117613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.147698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.147775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.147819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.427 [2024-10-08 21:04:09.178447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.427 [2024-10-08 21:04:09.178525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.427 [2024-10-08 21:04:09.178568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.687 [2024-10-08 21:04:09.209774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.687 [2024-10-08 21:04:09.209851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.687 [2024-10-08 21:04:09.209894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.687 [2024-10-08 21:04:09.245541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.687 [2024-10-08 21:04:09.245619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.687 [2024-10-08 21:04:09.245706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.687 [2024-10-08 21:04:09.276541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.687 [2024-10-08 21:04:09.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.276673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.688 [2024-10-08 21:04:09.300969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.688 [2024-10-08 21:04:09.301004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.301023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.688 [2024-10-08 21:04:09.330829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.688 [2024-10-08 21:04:09.330865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.330884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.688 [2024-10-08 21:04:09.359176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.688 [2024-10-08 21:04:09.359254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.359297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.688 [2024-10-08 21:04:09.386648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.688 [2024-10-08 21:04:09.386743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.386786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.688 [2024-10-08 21:04:09.420875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.688 [2024-10-08 21:04:09.420952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.688 [2024-10-08 21:04:09.420995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.452430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.452511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.452557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.481163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.481244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
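The bdevperf-side setup traced before the READ errors above follows a fixed order: enable per-controller error statistics, pause digest corruption, attach with --ddgst so the initiator verifies the data digest of every read, re-enable corruption, then run the workload. A condensed replay using only the RPCs echoed in this log, assuming the same sockets and paths and that rpc_cmd (no -s flag) addresses the target's default RPC socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock

    # bdevperf was started as: bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z
    $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side: keep crc32c clean while the controller attaches
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # host side: --ddgst makes the initiator check the data digest of received payloads
    $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: corrupt every 256th crc32c, which the host reports as
    # "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # drive the 2-second randread workload
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests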
00:36:40.948 [2024-10-08 21:04:09.481287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.513667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.513756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.513800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.544599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.544688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.544734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.575455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.575530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.575572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.604238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.604315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.604357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.635037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.635114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.635158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.664329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.664406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.664449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.948 [2024-10-08 21:04:09.698042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:40.948 [2024-10-08 21:04:09.698118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:3605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.948 [2024-10-08 21:04:09.698161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.728202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.728281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.728322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.759514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.759557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.759580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.778291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.778333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.778358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.804269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.804347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.804388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 8075.00 IOPS, 31.54 MiB/s [2024-10-08T19:04:09.971Z] [2024-10-08 21:04:09.834895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.834978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.835022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.874128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.874207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.874250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.910170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 
[2024-10-08 21:04:09.910250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.910293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.208 [2024-10-08 21:04:09.945896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.208 [2024-10-08 21:04:09.945973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.208 [2024-10-08 21:04:09.946016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:09.973805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:09.973839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:09.973858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.007616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.007714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.007762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.037686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.037784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.037848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.066250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.066376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.090725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.090804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.090847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.130133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.130214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.130257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.174392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.174471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.174514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.469 [2024-10-08 21:04:10.211710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.469 [2024-10-08 21:04:10.211789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.469 [2024-10-08 21:04:10.211843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.728 [2024-10-08 21:04:10.243852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.728 [2024-10-08 21:04:10.243929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.728 [2024-10-08 21:04:10.243972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.728 [2024-10-08 21:04:10.283000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.728 [2024-10-08 21:04:10.283090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.728 [2024-10-08 21:04:10.283133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.728 [2024-10-08 21:04:10.322198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.728 [2024-10-08 21:04:10.322278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.729 [2024-10-08 21:04:10.322320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.729 [2024-10-08 21:04:10.348733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.729 [2024-10-08 21:04:10.348812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.729 [2024-10-08 21:04:10.348855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.729 [2024-10-08 21:04:10.386848] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.729 [2024-10-08 21:04:10.386927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.729 [2024-10-08 21:04:10.386970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.729 [2024-10-08 21:04:10.423026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.729 [2024-10-08 21:04:10.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.729 [2024-10-08 21:04:10.423146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.729 [2024-10-08 21:04:10.463773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.729 [2024-10-08 21:04:10.463850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.729 [2024-10-08 21:04:10.463894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.988 [2024-10-08 21:04:10.503525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.988 [2024-10-08 21:04:10.503602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.988 [2024-10-08 21:04:10.503644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.988 [2024-10-08 21:04:10.544022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.988 [2024-10-08 21:04:10.544101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.988 [2024-10-08 21:04:10.544143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.988 [2024-10-08 21:04:10.584503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.988 [2024-10-08 21:04:10.584580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.988 [2024-10-08 21:04:10.584624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.988 [2024-10-08 21:04:10.624965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.988 [2024-10-08 21:04:10.625043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.988 [2024-10-08 21:04:10.625085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:41.988 [2024-10-08 21:04:10.665387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.988 [2024-10-08 21:04:10.665466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.988 [2024-10-08 21:04:10.665523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.989 [2024-10-08 21:04:10.705902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.989 [2024-10-08 21:04:10.705981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.989 [2024-10-08 21:04:10.706023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.989 [2024-10-08 21:04:10.747708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:41.989 [2024-10-08 21:04:10.747785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.989 [2024-10-08 21:04:10.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.247 [2024-10-08 21:04:10.785817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:42.247 [2024-10-08 21:04:10.785893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.247 [2024-10-08 21:04:10.785937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.247 [2024-10-08 21:04:10.826142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fccd70) 00:36:42.247 [2024-10-08 21:04:10.826220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.247 [2024-10-08 21:04:10.826262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.247 7532.50 IOPS, 29.42 MiB/s 00:36:42.247 Latency(us) 00:36:42.247 [2024-10-08T19:04:11.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.247 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:42.247 nvme0n1 : 2.06 7386.03 28.85 0.00 0.00 16956.38 7767.23 78837.38 00:36:42.247 [2024-10-08T19:04:11.010Z] =================================================================================================================== 00:36:42.247 [2024-10-08T19:04:11.010Z] Total : 7386.03 28.85 0.00 0.00 16956.38 7767.23 78837.38 00:36:42.247 { 00:36:42.247 "results": [ 00:36:42.247 { 00:36:42.247 "job": "nvme0n1", 00:36:42.247 "core_mask": "0x2", 00:36:42.247 "workload": "randread", 00:36:42.247 "status": "finished", 00:36:42.247 "queue_depth": 128, 00:36:42.247 "io_size": 4096, 00:36:42.247 "runtime": 2.056991, 00:36:42.247 "iops": 7386.031343841562, 00:36:42.247 "mibps": 28.851684936881103, 
00:36:42.247 "io_failed": 0, 00:36:42.247 "io_timeout": 0, 00:36:42.247 "avg_latency_us": 16956.384574377575, 00:36:42.247 "min_latency_us": 7767.22962962963, 00:36:42.247 "max_latency_us": 78837.38074074074 00:36:42.247 } 00:36:42.247 ], 00:36:42.247 "core_count": 1 00:36:42.247 } 00:36:42.247 21:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:42.247 21:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:42.247 21:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:42.247 21:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:42.247 | .driver_specific 00:36:42.247 | .nvme_error 00:36:42.247 | .status_code 00:36:42.247 | .command_transient_transport_error' 00:36:42.506 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 59 > 0 )) 00:36:42.506 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1857462 00:36:42.506 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1857462 ']' 00:36:42.506 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1857462 00:36:42.506 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857462 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857462' 00:36:42.765 killing process with pid 1857462 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1857462 00:36:42.765 Received shutdown signal, test time was about 2.000000 seconds 00:36:42.765 00:36:42.765 Latency(us) 00:36:42.765 [2024-10-08T19:04:11.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.765 [2024-10-08T19:04:11.528Z] =================================================================================================================== 00:36:42.765 [2024-10-08T19:04:11.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.765 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1857462 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:43.023 21:04:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1858014 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1858014 /var/tmp/bperf.sock 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1858014 ']' 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.023 21:04:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.023 [2024-10-08 21:04:11.742410] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:43.023 [2024-10-08 21:04:11.742518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858014 ] 00:36:43.023 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:43.023 Zero copy mechanism will not be used. 
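After each run, host/digest.sh checks that transient transport errors were actually recorded by the initiator (statistics were enabled with --nvme-error-stat). A sketch of that check, with the bdev_get_iostat call and jq path copied from the trace above; the 4096-byte run returned 59:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # count of COMMAND TRANSIENT TRANSPORT ERROR completions seen by nvme0n1
    errs=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))   # fails the test if no digest errors were observed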
00:36:43.283 [2024-10-08 21:04:11.850040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.542 [2024-10-08 21:04:12.071469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.802 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.802 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:43.802 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:43.802 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:44.371 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:44.371 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.371 21:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.371 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.371 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.371 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.940 nvme0n1 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:44.940 21:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:45.242 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:45.242 Zero copy mechanism will not be used. 00:36:45.242 Running I/O for 2 seconds... 
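The second pass, whose output follows, repeats the same flow with 128 KiB reads at queue depth 16 and a corruption interval of 32; because 131072 bytes exceeds the 65536-byte zero-copy threshold, bdevperf notes that zero copy is skipped. Only the parameters differ from the first run, as traced above:

    # second bdevperf instance: large reads, shallow queue
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # corrupt every 32nd crc32c this time
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        accel_error_inject_error -o crc32c -t corrupt -i 32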
00:36:45.242 [2024-10-08 21:04:13.770028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.770134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.770184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.781413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.781492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.781536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.793143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.793265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.805673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.805766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.805813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.817632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.817724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.817769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.829918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.829997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.830041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.841525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.841604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.841666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.853822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.853901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.853945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.866398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.866478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.866521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.880308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.880385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.880430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.890337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.890412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.890455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.902014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.902088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.902130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.913486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.913559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.913602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.924964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.925038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.925080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.936540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.936614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.936672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.948221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.948294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.948336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.959696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.959770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.971345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.971420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.971462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.982959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.983033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.983075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.242 [2024-10-08 21:04:13.994507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.242 [2024-10-08 21:04:13.994581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.242 [2024-10-08 21:04:13.994624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.006344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.006416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:45.520 [2024-10-08 21:04:14.006471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.018143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.018217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.029635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.029723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.029767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.041177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.041251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.041292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.052790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.052863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.052905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.066033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.066106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.066148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.076016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.076089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.076131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.087684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.087759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.087803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.099393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.099470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.099515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.111058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.111133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.111175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.122866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.122940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.122983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.134338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.134453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.146116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.146188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.146229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.158205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.158278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.158320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.169886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.169960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.170003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.181588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.181680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.181728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.195248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.195322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.195365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.206253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.206329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.206385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.219097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.219212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.232109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.232183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.232224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.245635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.245729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.245773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.257990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 
00:36:45.520 [2024-10-08 21:04:14.258065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.258106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.520 [2024-10-08 21:04:14.270443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.520 [2024-10-08 21:04:14.270518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.520 [2024-10-08 21:04:14.270561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.282732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.282807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.282849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.295097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.295172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.295215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.304434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.304511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.304554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.314560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.314674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.314725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.324166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.324242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.324286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.334032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.334105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.334147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.344616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.344711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.344756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.357307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.357381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.370197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.370271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.370313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.380989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.381030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.381053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.389793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.389838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.389862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.400374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.400451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.400493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.410823] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.410865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.410888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.422139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.422214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.422256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.433812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.433888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.433930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.443891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.443928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.443962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.455891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.455950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.455996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.468100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.468180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.468224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.478817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.478851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.478871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:36:45.782 [2024-10-08 21:04:14.488884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.488920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.488940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:45.782 [2024-10-08 21:04:14.499947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.782 [2024-10-08 21:04:14.500025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.782 [2024-10-08 21:04:14.500086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.783 [2024-10-08 21:04:14.510163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.783 [2024-10-08 21:04:14.510239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.783 [2024-10-08 21:04:14.510281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:45.783 [2024-10-08 21:04:14.521200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.783 [2024-10-08 21:04:14.521243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.783 [2024-10-08 21:04:14.521266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:45.783 [2024-10-08 21:04:14.533449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:45.783 [2024-10-08 21:04:14.533528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.783 [2024-10-08 21:04:14.533573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.545867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.545932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.545977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.558541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.558620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.558684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.570773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.570838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.582956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.583032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.583074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.595357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.595431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.595473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.608063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.608155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.608201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.621230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.621308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.621351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.634741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.634817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.634862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.646824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.646906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.646949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.659067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.659142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.659184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.672154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.672229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.672274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.680843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.680918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.680959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.690866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.690942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.690985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.701596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.701734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.713603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.713691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.713738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.726521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.726597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.726641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.739249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.739326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.739370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.752277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.752354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.752398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.044 2650.00 IOPS, 331.25 MiB/s [2024-10-08T19:04:14.807Z] [2024-10-08 21:04:14.770062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.770143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.770188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.782916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.782991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.783034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.044 [2024-10-08 21:04:14.794869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.044 [2024-10-08 21:04:14.794943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.044 [2024-10-08 21:04:14.794987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.305 [2024-10-08 21:04:14.806982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.305 [2024-10-08 21:04:14.807057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.305 [2024-10-08 21:04:14.807100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.305 [2024-10-08 21:04:14.819288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.305 [2024-10-08 21:04:14.819362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.819420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.831245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.831319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.831360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.842686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.842761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.842805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.850112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.850185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.850227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.862484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.862562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.862605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.875171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.875247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.875289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.887311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.887386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.887428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.899388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.899463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.899504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.911536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.911612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.911671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.923676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.923749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.923790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.936025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.936099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.936141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.948177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.948250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.948291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.960622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.960712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.960755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.972778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.972852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.972894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.984371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.984445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.984486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:14.996520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:14.996595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:14.996638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:15.008426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:15.008499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:15.008541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:15.020460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:15.020534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:15.020591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:15.032573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:15.032647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:15.032708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:15.044583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:15.044672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:15.044718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.306 [2024-10-08 21:04:15.056627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.306 [2024-10-08 21:04:15.056720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.306 [2024-10-08 21:04:15.056765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.069000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 
00:36:46.568 [2024-10-08 21:04:15.069073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.069116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.081463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.081537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.081578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.093426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.093503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.093546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.105610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.105700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.105744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.119388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.119468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.119511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.132636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.132740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.132785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.145048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.145124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.145166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.158070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.158145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.158187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.170383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.170458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.170499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.182340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.182415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.182457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.194622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.194716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.194760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.206287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.206364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.206407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.216336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.216371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.216390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.224223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.224301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.224344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.234388] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.234462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.234506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.245021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.245100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.245144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.256146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.256221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.256264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.269198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.269275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.269318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.282137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.282213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.295566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.295643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.295707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.568 [2024-10-08 21:04:15.307957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.308031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.308073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:36:46.568 [2024-10-08 21:04:15.321204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.568 [2024-10-08 21:04:15.321281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.568 [2024-10-08 21:04:15.321326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.335222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.335300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.335360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.348641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.348731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.348775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.361214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.361288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.361331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.373681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.373758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.373803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.385484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.385602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.397548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.397628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.397701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.408387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.408508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.418049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.418125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.418167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.427858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.427892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.427911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.437891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.437930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.437950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.449105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.449180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.449222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.460964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.461038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.461080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.474684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.474761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.474805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.487314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.487391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.487434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.500646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.500740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.500783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.513029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.513106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.513149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.525218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.525294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.525336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.537080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.537153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.537194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.549486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.549563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.549606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.561497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.561574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 
[2024-10-08 21:04:15.561617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.573442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.573521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.573565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.828 [2024-10-08 21:04:15.584450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:46.828 [2024-10-08 21:04:15.584486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.828 [2024-10-08 21:04:15.584505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.592929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.593014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.593060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.604707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.604745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.604765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.614820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.614855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.614874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.624790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.624825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.624844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.634393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.634486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.634531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.644179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.644255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.644297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.653917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.653989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.654031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.663721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.663755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.663774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.672070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.672127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.672172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.681932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.681989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.682012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.691364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.691440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.691482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.700269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.700303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.700322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.710043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.710163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.719958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.720032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.720075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.729963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.730039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.730096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.741504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.741581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.741623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.089 [2024-10-08 21:04:15.754217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff9e00) 00:36:47.089 [2024-10-08 21:04:15.754296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.089 [2024-10-08 21:04:15.754355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.089 2652.50 IOPS, 331.56 MiB/s 00:36:47.089 Latency(us) 00:36:47.089 [2024-10-08T19:04:15.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:47.089 nvme0n1 : 2.01 2653.16 331.64 0.00 0.00 6020.48 2269.49 17864.63 00:36:47.089 [2024-10-08T19:04:15.852Z] =================================================================================================================== 00:36:47.089 [2024-10-08T19:04:15.852Z] Total : 2653.16 331.64 0.00 0.00 6020.48 2269.49 17864.63 00:36:47.089 { 00:36:47.089 "results": [ 00:36:47.089 { 00:36:47.089 "job": "nvme0n1", 00:36:47.089 "core_mask": "0x2", 
00:36:47.089 "workload": "randread", 00:36:47.089 "status": "finished", 00:36:47.089 "queue_depth": 16, 00:36:47.089 "io_size": 131072, 00:36:47.089 "runtime": 2.005534, 00:36:47.089 "iops": 2653.1587098498453, 00:36:47.089 "mibps": 331.64483873123066, 00:36:47.089 "io_failed": 0, 00:36:47.089 "io_timeout": 0, 00:36:47.089 "avg_latency_us": 6020.483363333264, 00:36:47.089 "min_latency_us": 2269.4874074074073, 00:36:47.089 "max_latency_us": 17864.62814814815 00:36:47.089 } 00:36:47.089 ], 00:36:47.089 "core_count": 1 00:36:47.089 } 00:36:47.089 21:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:47.089 21:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:47.089 21:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:47.089 21:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:47.089 | .driver_specific 00:36:47.089 | .nvme_error 00:36:47.089 | .status_code 00:36:47.089 | .command_transient_transport_error' 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1858014 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1858014 ']' 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1858014 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1858014 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1858014' 00:36:47.660 killing process with pid 1858014 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1858014 00:36:47.660 Received shutdown signal, test time was about 2.000000 seconds 00:36:47.660 00:36:47.660 Latency(us) 00:36:47.660 [2024-10-08T19:04:16.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.660 [2024-10-08T19:04:16.423Z] =================================================================================================================== 00:36:47.660 [2024-10-08T19:04:16.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:47.660 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1858014 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 
00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1858555 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1858555 /var/tmp/bperf.sock 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1858555 ']' 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.920 21:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.181 [2024-10-08 21:04:16.752106] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:36:48.181 [2024-10-08 21:04:16.752278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858555 ] 00:36:48.181 [2024-10-08 21:04:16.861752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.441 [2024-10-08 21:04:17.074693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.701 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.701 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:48.701 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.701 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.271 21:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:50.209 nvme0n1 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:50.209 21:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:50.209 Running I/O for 2 seconds... 
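For reference, the randwrite digest-error pass traced above (run_bperf_err randwrite 4096 128) boils down to roughly the following shell sequence. This is a sketch reconstructed from the trace, not the test script itself; $SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, the socket paths are the ones used in this run, and the accel_error_inject_error calls go through rpc_cmd in the trace, which here appears to address the nvmf target application's default RPC socket rather than bperf.sock.

  # start bdevperf on core 1 (mask 0x2) in wait-for-RPC mode (-z): 4 KiB randwrite, QD 128, 2 s
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-command NVMe error statistics (--nvme-error-stat) and retry failed I/O
  # indefinitely (--bdev-retry-count -1) on the host-side NVMe bdev driver
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # clear any stale crc32c error injection on the target, then attach the TCP controller
  # with data digest enabled (--ddgst); its namespace shows up as bdev nvme0n1
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt the next 256 crc32c accel operations, then run the 2-second workload; the
  # corrupted digests surface as the data digest errors and COMMAND TRANSIENT TRANSPORT
  # ERROR completions logged below
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests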
00:36:50.209 [2024-10-08 21:04:18.906989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f6458 00:36:50.209 [2024-10-08 21:04:18.909742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.209 [2024-10-08 21:04:18.909833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:50.209 [2024-10-08 21:04:18.943139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e4de8 00:36:50.209 [2024-10-08 21:04:18.947519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.209 [2024-10-08 21:04:18.947594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:50.209 [2024-10-08 21:04:18.964502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e01f8 00:36:50.209 [2024-10-08 21:04:18.966517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.209 [2024-10-08 21:04:18.966591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:18.997792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f1868 00:36:50.467 [2024-10-08 21:04:19.001114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.467 [2024-10-08 21:04:19.001188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:19.024918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fc998 00:36:50.467 [2024-10-08 21:04:19.027272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.467 [2024-10-08 21:04:19.027345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:19.055263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198efae0 00:36:50.467 [2024-10-08 21:04:19.058270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.467 [2024-10-08 21:04:19.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:19.090433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e2c28 00:36:50.467 [2024-10-08 21:04:19.095118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.467 [2024-10-08 21:04:19.095193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:19.111034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198eb328 00:36:50.467 [2024-10-08 21:04:19.113082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.467 [2024-10-08 21:04:19.113153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.467 [2024-10-08 21:04:19.138760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ef270 00:36:50.468 [2024-10-08 21:04:19.140214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.468 [2024-10-08 21:04:19.140286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.468 [2024-10-08 21:04:19.168246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fc128 00:36:50.468 [2024-10-08 21:04:19.169793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.468 [2024-10-08 21:04:19.169825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:50.468 [2024-10-08 21:04:19.189985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f6020 00:36:50.468 [2024-10-08 21:04:19.192110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.468 [2024-10-08 21:04:19.192184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:50.468 [2024-10-08 21:04:19.212093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f6020 00:36:50.468 [2024-10-08 21:04:19.214237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.468 [2024-10-08 21:04:19.214309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.232840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f46d0 00:36:50.727 [2024-10-08 21:04:19.234618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.234722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.264086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e5ec8 00:36:50.727 [2024-10-08 21:04:19.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.268481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.288232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e23b8 00:36:50.727 [2024-10-08 21:04:19.290931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.291002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.325973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fb8b8 00:36:50.727 [2024-10-08 21:04:19.330712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.330784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.355008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ef6a8 00:36:50.727 [2024-10-08 21:04:19.359428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.359503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.375426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198eee38 00:36:50.727 [2024-10-08 21:04:19.377789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.377860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.411077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198de8a8 00:36:50.727 [2024-10-08 21:04:19.414909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.414981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.438578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e3060 00:36:50.727 [2024-10-08 21:04:19.440970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.441010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:50.727 [2024-10-08 21:04:19.466756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fa3a0 00:36:50.727 [2024-10-08 21:04:19.470086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.727 [2024-10-08 21:04:19.470164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.496096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f0350 00:36:50.986 [2024-10-08 21:04:19.498700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.498772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.524976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ec840 00:36:50.986 [2024-10-08 21:04:19.528774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.528846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.555124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e6738 00:36:50.986 [2024-10-08 21:04:19.558902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.558972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.583291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fbcf0 00:36:50.986 [2024-10-08 21:04:19.586708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.586779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.611406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f1868 00:36:50.986 [2024-10-08 21:04:19.614307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.614379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.640738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198df118 00:36:50.986 [2024-10-08 21:04:19.643759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.643830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.676621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e6fa8 00:36:50.986 [2024-10-08 21:04:19.681305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.681376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.698214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e88f8 00:36:50.986 [2024-10-08 21:04:19.699704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.699742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:50.986 [2024-10-08 21:04:19.732459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f4f40 00:36:50.986 [2024-10-08 21:04:19.736408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:50.986 [2024-10-08 21:04:19.736490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.759503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e8088 00:36:51.244 [2024-10-08 21:04:19.763124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.763195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.787346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e3060 00:36:51.244 [2024-10-08 21:04:19.790482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.790553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.815459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f4298 00:36:51.244 [2024-10-08 21:04:19.817849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.817922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.846334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f4f40 00:36:51.244 [2024-10-08 21:04:19.849286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.849359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.879840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f7970 00:36:51.244 8918.00 IOPS, 34.84 MiB/s [2024-10-08T19:04:20.007Z] [2024-10-08 21:04:19.883811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20035 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.883891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.901047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198de038 00:36:51.244 [2024-10-08 21:04:19.903074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.903146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.939088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e3d08 00:36:51.244 [2024-10-08 21:04:19.943507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.943581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.960308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e1710 00:36:51.244 [2024-10-08 21:04:19.961671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.961710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:51.244 [2024-10-08 21:04:19.989021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f8618 00:36:51.244 [2024-10-08 21:04:19.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.244 [2024-10-08 21:04:19.991153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.011094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f92c0 00:36:51.504 [2024-10-08 21:04:20.012189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.012235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.024588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ee5c8 00:36:51.504 [2024-10-08 21:04:20.026085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.026122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.050298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f6458 00:36:51.504 [2024-10-08 21:04:20.052405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1683 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.052481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.087026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e4140 00:36:51.504 [2024-10-08 21:04:20.091522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.091600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.117422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e3060 00:36:51.504 [2024-10-08 21:04:20.121943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.122019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.145949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f57b0 00:36:51.504 [2024-10-08 21:04:20.150047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.150118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.172725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e1710 00:36:51.504 [2024-10-08 21:04:20.175954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.176024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.202096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198de8a8 00:36:51.504 [2024-10-08 21:04:20.205384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.205463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.231475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e4de8 00:36:51.504 [2024-10-08 21:04:20.233403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.233443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:51.504 [2024-10-08 21:04:20.253344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ee190 00:36:51.504 [2024-10-08 21:04:20.255806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:3240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.504 [2024-10-08 21:04:20.255878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.286252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e9168 00:36:51.765 [2024-10-08 21:04:20.290269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.290341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.309541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f20d8 00:36:51.765 [2024-10-08 21:04:20.313561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.313634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.338897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198eee38 00:36:51.765 [2024-10-08 21:04:20.341485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.341569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.366539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f1430 00:36:51.765 [2024-10-08 21:04:20.370707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.370779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.394724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e6738 00:36:51.765 [2024-10-08 21:04:20.398113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.398186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.411074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f7100 00:36:51.765 [2024-10-08 21:04:20.412953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.413048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.440186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e5a90 00:36:51.765 [2024-10-08 21:04:20.442371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.442444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.467009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198eff18 00:36:51.765 [2024-10-08 21:04:20.467848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.467881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.489505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e95a0 00:36:51.765 [2024-10-08 21:04:20.490811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.490849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:51.765 [2024-10-08 21:04:20.510546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ed0b0 00:36:51.765 [2024-10-08 21:04:20.512289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.765 [2024-10-08 21:04:20.512361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:52.025 [2024-10-08 21:04:20.544496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f3a28 00:36:52.026 [2024-10-08 21:04:20.548181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.548260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.572094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ecc78 00:36:52.026 [2024-10-08 21:04:20.575158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.575229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.600117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198fb048 00:36:52.026 [2024-10-08 21:04:20.603018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.603090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.632595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e84c0 00:36:52.026 [2024-10-08 21:04:20.636685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.636755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.660188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ea248 00:36:52.026 [2024-10-08 21:04:20.663432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.663503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.689469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f1ca0 00:36:52.026 [2024-10-08 21:04:20.692881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.692972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.717550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e8d30 00:36:52.026 [2024-10-08 21:04:20.720126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.720197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.746306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f46d0 00:36:52.026 [2024-10-08 21:04:20.748977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.749047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:52.026 [2024-10-08 21:04:20.775569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e01f8 00:36:52.026 [2024-10-08 21:04:20.777926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.026 [2024-10-08 21:04:20.777995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:52.286 [2024-10-08 21:04:20.813819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198e8d30 00:36:52.286 [2024-10-08 21:04:20.818524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.286 [2024-10-08 21:04:20.818594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:52.286 [2024-10-08 21:04:20.835057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198ebb98 00:36:52.286 [2024-10-08 
21:04:20.837408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.286 [2024-10-08 21:04:20.837476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:52.286 [2024-10-08 21:04:20.870923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50be0) with pdu=0x2000198f8618 00:36:52.286 [2024-10-08 21:04:20.874937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.286 [2024-10-08 21:04:20.875008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:52.286 9095.50 IOPS, 35.53 MiB/s 00:36:52.286 Latency(us) 00:36:52.286 [2024-10-08T19:04:21.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.286 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:52.286 nvme0n1 : 2.02 9073.05 35.44 0.00 0.00 14069.05 3592.34 38253.61 00:36:52.286 [2024-10-08T19:04:21.049Z] =================================================================================================================== 00:36:52.286 [2024-10-08T19:04:21.049Z] Total : 9073.05 35.44 0.00 0.00 14069.05 3592.34 38253.61 00:36:52.286 { 00:36:52.286 "results": [ 00:36:52.286 { 00:36:52.286 "job": "nvme0n1", 00:36:52.286 "core_mask": "0x2", 00:36:52.286 "workload": "randwrite", 00:36:52.286 "status": "finished", 00:36:52.286 "queue_depth": 128, 00:36:52.286 "io_size": 4096, 00:36:52.286 "runtime": 2.019056, 00:36:52.286 "iops": 9073.05196091639, 00:36:52.286 "mibps": 35.441609222329646, 00:36:52.286 "io_failed": 0, 00:36:52.286 "io_timeout": 0, 00:36:52.286 "avg_latency_us": 14069.051996934977, 00:36:52.286 "min_latency_us": 3592.343703703704, 00:36:52.286 "max_latency_us": 38253.60592592593 00:36:52.286 } 00:36:52.286 ], 00:36:52.286 "core_count": 1 00:36:52.286 } 00:36:52.286 21:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:52.286 21:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:52.286 21:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:52.286 21:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:52.286 | .driver_specific 00:36:52.286 | .nvme_error 00:36:52.286 | .status_code 00:36:52.286 | .command_transient_transport_error' 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 71 > 0 )) 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1858555 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1858555 ']' 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1858555 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
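[editor's note] The digest.sh trace just above shows how the test decides whether the injected digest errors were actually observed: it queries the bdevperf instance for bdev I/O statistics over its RPC socket, extracts the transient-transport-error counter from the NVMe error statistics with jq, and asserts the count is greater than zero (the "(( 71 > 0 ))" check in the trace). A minimal sketch of that query, with the socket path, bdev name, and jq filter copied from the trace above:

  # Ask the bdevperf app (listening on /var/tmp/bperf.sock) for per-bdev I/O stats,
  # then pull out how many completions failed with COMMAND TRANSIENT TRANSPORT ERROR.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
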
00:36:52.855 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1858555 00:36:53.113 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:53.113 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:53.113 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1858555' 00:36:53.113 killing process with pid 1858555 00:36:53.114 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1858555 00:36:53.114 Received shutdown signal, test time was about 2.000000 seconds 00:36:53.114 00:36:53.114 Latency(us) 00:36:53.114 [2024-10-08T19:04:21.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.114 [2024-10-08T19:04:21.877Z] =================================================================================================================== 00:36:53.114 [2024-10-08T19:04:21.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:53.114 21:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1858555 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1859214 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1859214 /var/tmp/bperf.sock 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1859214 ']' 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:53.373 21:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:53.634 [2024-10-08 21:04:22.141757] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:36:53.634 [2024-10-08 21:04:22.141924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859214 ] 00:36:53.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:53.634 Zero copy mechanism will not be used. 00:36:53.634 [2024-10-08 21:04:22.288149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.895 [2024-10-08 21:04:22.505815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.465 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:54.465 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:54.465 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.465 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:55.035 21:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:55.293 nvme0n1 00:36:55.552 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:55.553 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.553 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.553 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.553 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:55.553 21:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:55.818 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:55.818 Zero copy mechanism will not be used. 00:36:55.819 Running I/O for 2 seconds... 
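[editor's note] The trace between the two runs sets up the next error-injection pass (run_bperf_err randwrite 131072 16): bdevperf is started paused (-z) on its own RPC socket, NVMe error accounting and unlimited bdev retries are enabled, the controller is attached over TCP with the data digest enabled (--ddgst), the target-side accel crc32c operation is armed to produce corrupted digests, and perform_tests starts the 2-second run whose digest errors fill the remainder of this log. A condensed shell sketch of that sequence; paths, addresses, and arguments are taken from the trace above, while the target-side RPC socket is not shown in the log and is assumed to be the default (/var/tmp/spdk.sock):

  # Start bdevperf paused (-z) so it can be configured over /var/tmp/bperf.sock first.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # Count NVMe errors per status code and retry indefinitely instead of failing the bdev.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target with the data digest (DDGST) enabled so payload CRC32C is verified.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # On the target side (default RPC socket assumed), arm the accel crc32c error injector
  # with the same arguments as in the trace above.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the timed run; the digest errors that follow in this log are its output.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
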
00:36:55.819 [2024-10-08 21:04:24.357104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.357895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.357979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.371226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.371980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.372057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.384948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.385682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.385757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.398619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.399348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.399421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.412199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.412937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.413013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.425914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.426634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.426724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.439330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.440071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.440147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.452883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.453600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.453687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.466473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.467186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.467259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.479949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.480674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.480748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.493643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.494371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.494445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.506959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.507728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.507803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.520690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.521414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.521487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.534143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.534895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.534970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.547793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.548451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.819 [2024-10-08 21:04:24.548491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:55.819 [2024-10-08 21:04:24.557554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.819 [2024-10-08 21:04:24.558125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-10-08 21:04:24.558166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:55.820 [2024-10-08 21:04:24.569374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:55.820 [2024-10-08 21:04:24.569992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-10-08 21:04:24.570072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.582032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.582458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.582512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.594110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.594532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.594601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.605960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.606384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.606461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.618608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.619052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.619084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.631480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.631910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.631995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.644577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.645014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.645077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.657627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.082 [2024-10-08 21:04:24.658073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.082 [2024-10-08 21:04:24.658141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.082 [2024-10-08 21:04:24.670678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.671095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.671158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.683619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.684057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.684128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.696799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.697222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.697270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.709932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.710355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 
[2024-10-08 21:04:24.710425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.723081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.723503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.723576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.736335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.736769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.736854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.749533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.749979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.750058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.762838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.763258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.763311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.776121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.776542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.776594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.789230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.789647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.789721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.802493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.802913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.802946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.815510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.815925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.815964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.828703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.829125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.829193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.083 [2024-10-08 21:04:24.842292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.083 [2024-10-08 21:04:24.842908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.083 [2024-10-08 21:04:24.842983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.856195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.856757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.856830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.869520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.869991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.870024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.882377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.882839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.882913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.895265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.895919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.908842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.909345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.909416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.922189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.922736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.922808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.935526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.936120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.936194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.948796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.949357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.962105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.962541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.962628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.975227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.975665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.975760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:24.988190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:24.988617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:24.988709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:25.000957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:25.001385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:25.001458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:25.013188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:25.013581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:25.013613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:25.023849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:25.024531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:25.024604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:25.035460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.344 [2024-10-08 21:04:25.035953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.344 [2024-10-08 21:04:25.036028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.344 [2024-10-08 21:04:25.045093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.345 [2024-10-08 21:04:25.045488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.345 [2024-10-08 21:04:25.045528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.345 [2024-10-08 21:04:25.056098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.345 [2024-10-08 21:04:25.056855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.345 [2024-10-08 21:04:25.056936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.345 [2024-10-08 21:04:25.070141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.345 
[2024-10-08 21:04:25.070902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.345 [2024-10-08 21:04:25.070974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.345 [2024-10-08 21:04:25.084178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.345 [2024-10-08 21:04:25.084943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.345 [2024-10-08 21:04:25.085018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.345 [2024-10-08 21:04:25.097928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.345 [2024-10-08 21:04:25.098628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.345 [2024-10-08 21:04:25.098717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.112429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.113154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.113233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.125942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.126687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.126760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.139824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.140532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.140604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.153692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.154391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.154484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.167400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.168110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.168184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.181010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.181801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.181875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.194890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.195678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.195750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.208523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.209246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.209318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.222011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.222796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.222867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.235460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.236235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.236308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.248797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.249495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.249567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.262194] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.262995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.263068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.275849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.276647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.276733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.289464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.290243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.290316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.302915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.303704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.303776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.316441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.317292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.330114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.330898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.606 [2024-10-08 21:04:25.343496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.347056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.347131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:36:56.606 2354.00 IOPS, 294.25 MiB/s [2024-10-08T19:04:25.369Z] [2024-10-08 21:04:25.359989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.606 [2024-10-08 21:04:25.360787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.606 [2024-10-08 21:04:25.360860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.374771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.375542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.375614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.388970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.389834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.402800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.403574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.403647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.416667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.417371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.417443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.430279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.431072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.431144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.443888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.867 [2024-10-08 21:04:25.444710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.867 [2024-10-08 21:04:25.444782] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.867 [2024-10-08 21:04:25.457678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.458404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.458475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.471232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.472040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.472113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.484986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.485769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.485841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.498646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.499441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.499512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.512404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.513187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.513274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.525988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.526784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.526856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.539784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.540644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.553564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.554348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.554420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.565857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.566583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.566678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.578249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.578864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.578927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.590856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.591523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.591595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.602863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.603610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.868 [2024-10-08 21:04:25.616014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:56.868 [2024-10-08 21:04:25.616804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.868 [2024-10-08 21:04:25.616877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.630497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.631294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:57.128 [2024-10-08 21:04:25.631365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.644280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.645072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.645145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.658027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.658823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.658894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.671555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.672351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.672423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.685336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.686131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.686204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.698958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.699749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.699821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.712572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.713362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.713433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.725799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.726572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.726644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.739542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.740419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.753334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.128 [2024-10-08 21:04:25.754123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.128 [2024-10-08 21:04:25.754197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.128 [2024-10-08 21:04:25.766934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.767723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.767795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.780738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.781506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.781576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.794364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.795157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.795229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.807965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.808742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.808814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.821792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.822558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.822630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.835357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.836150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.836223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.849048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.849808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.849882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.861440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.861981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.871646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.872206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.872279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.129 [2024-10-08 21:04:25.883196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.129 [2024-10-08 21:04:25.883810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.129 [2024-10-08 21:04:25.883842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.895139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.895790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.895823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.905206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.905814] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.905852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.917582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.918315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.918391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.931063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.931819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.931891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.944722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.945439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.945511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.958297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.959030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.959102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.972011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.972759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.972831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.985613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:25.986347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.986418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:25.999063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 
00:36:57.390 [2024-10-08 21:04:25.999808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:25.999881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.012682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.013474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.026068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.026798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.026870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.039521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.040230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.040305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.053096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.053832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.053904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.066665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.067366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.067437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.080187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.080919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.081004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.093510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.094245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.094320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.106722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.107442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.107514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.120207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.120942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.121013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.133777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.134496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.134569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.390 [2024-10-08 21:04:26.147340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.390 [2024-10-08 21:04:26.148089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.390 [2024-10-08 21:04:26.148162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.161488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.162210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.162283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.173630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.173775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.173805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.185269] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.185845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.196766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.197359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.197433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.208634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.209166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.209206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.222022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.222755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.222827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.235528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.236265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.236339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.249266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.249991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.250064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.651 [2024-10-08 21:04:26.262859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90 00:36:57.651 [2024-10-08 21:04:26.263579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.651 [2024-10-08 21:04:26.263666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
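Each group of three lines above is one deliberately corrupted WRITE: tcp.c detects a CRC32C data digest mismatch on the data PDU, and the command is then completed back to bdevperf with the generic NVMe status Transient Transport Error, printed as (00/22). The run summary below follows the last few of these completions, and the test then reads the accumulated error count back over the bperf RPC socket (the get_transient_errcount trace after the summary). A stand-alone sketch of that flow is shown here; the target address, service id and subsystem NQN are placeholders, and --ddgst is assumed to be the scripts/rpc.py switch that enables TCP data digest in this SPDK tree:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Attach the controller with data digest enabled so every data PDU carries a CRC32C
    # (placeholder address/NQN; adjust to the subsystem actually exported by the target).
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ddgst

    # After the workload, read back how many commands completed with status 0x22
    # (command_transient_transport_error), the same query digest.sh issues below.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    echo "observed $errcount transient transport errors"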
00:36:57.651 [2024-10-08 21:04:26.276344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.277081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.277153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:57.651 [2024-10-08 21:04:26.289837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.290573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.290644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:57.651 [2024-10-08 21:04:26.303351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.304080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.304153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:57.651 [2024-10-08 21:04:26.316976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.317693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.317766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:57.651 [2024-10-08 21:04:26.330356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.331082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.331155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:57.651 [2024-10-08 21:04:26.342133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d80) with pdu=0x2000198fef90
00:36:57.651 [2024-10-08 21:04:26.342802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.651 [2024-10-08 21:04:26.342836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:57.652 2343.00 IOPS, 292.88 MiB/s
00:36:57.652 Latency(us)
00:36:57.652 [2024-10-08T19:04:26.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:57.652 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:57.652 nvme0n1 : 2.01 2343.31 292.91 0.00 0.00 6807.02 3956.43 16311.18
00:36:57.652 [2024-10-08T19:04:26.415Z] ===================================================================================================================
00:36:57.652 [2024-10-08T19:04:26.415Z] Total : 2343.31 292.91 0.00 0.00 6807.02 3956.43 16311.18
00:36:57.652 {
00:36:57.652   "results": [
00:36:57.652     {
00:36:57.652       "job": "nvme0n1",
00:36:57.652       "core_mask": "0x2",
00:36:57.652       "workload": "randwrite",
00:36:57.652       "status": "finished",
00:36:57.652       "queue_depth": 16,
00:36:57.652       "io_size": 131072,
00:36:57.652       "runtime": 2.0087,
00:36:57.652       "iops": 2343.3066162194455,
00:36:57.652       "mibps": 292.9133270274307,
00:36:57.652       "io_failed": 0,
00:36:57.652       "io_timeout": 0,
00:36:57.652       "avg_latency_us": 6807.015767532989,
00:36:57.652       "min_latency_us": 3956.4325925925928,
00:36:57.652       "max_latency_us": 16311.182222222222
00:36:57.652     }
00:36:57.652   ],
00:36:57.652   "core_count": 1
00:36:57.652 }
00:36:57.652 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:57.652 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:57.652 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:57.652 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:57.652 | .driver_specific
00:36:57.652 | .nvme_error
00:36:57.652 | .status_code
00:36:57.652 | .command_transient_transport_error'
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1859214
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1859214 ']'
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1859214
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1859214
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:58.220 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1859214'
00:36:58.221 killing process with pid 1859214
00:36:58.221 21:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1859214
00:36:58.221 Received shutdown signal, test time was about 2.000000 seconds
00:36:58.221
00:36:58.221 Latency(us)
00:36:58.221 [2024-10-08T19:04:26.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:58.221 [2024-10-08T19:04:26.984Z] ===================================================================================================================
00:36:58.221 [2024-10-08T19:04:26.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:58.221 21:04:26
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1859214 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1857325 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1857325 ']' 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1857325 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857325 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857325' 00:36:58.791 killing process with pid 1857325 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1857325 00:36:58.791 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1857325 00:36:59.051 00:36:59.052 real 0m21.621s 00:36:59.052 user 0m45.886s 00:36:59.052 sys 0m5.634s 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.052 ************************************ 00:36:59.052 END TEST nvmf_digest_error 00:36:59.052 ************************************ 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.052 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.311 rmmod nvme_tcp 00:36:59.311 rmmod nvme_fabrics 00:36:59.311 rmmod nvme_keyring 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1857325 ']' 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1857325 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1857325 ']' 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@954 -- # kill -0 1857325 00:36:59.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1857325) - No such process 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1857325 is not found' 00:36:59.311 Process with pid 1857325 is not found 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.311 21:04:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.221 00:37:01.221 real 0m50.398s 00:37:01.221 user 1m35.395s 00:37:01.221 sys 0m14.168s 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.221 ************************************ 00:37:01.221 END TEST nvmf_digest 00:37:01.221 ************************************ 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.221 21:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.483 ************************************ 00:37:01.483 START TEST nvmf_bdevperf 00:37:01.483 ************************************ 00:37:01.483 21:04:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:01.483 * Looking for test storage... 
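Teardown in this phase goes through the same killprocess helper twice: pid 1859214 (the bperf reactor) is still alive, so it is signalled and waited on, while pid 1857325 has already exited, kill -0 fails with the 'No such process' message seen above, and only a notice is printed. A simplified sketch of the pattern visible in that trace, assuming the caller owns the process so wait is valid:

    # Condensed from the behaviour traced above; the real helper in
    # test/common/autotest_common.sh performs a few extra platform checks.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        # Never signal a privileged wrapper such as sudo by mistake.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }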
00:37:01.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.483 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:01.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.484 --rc genhtml_branch_coverage=1 00:37:01.484 --rc genhtml_function_coverage=1 00:37:01.484 --rc genhtml_legend=1 00:37:01.484 --rc geninfo_all_blocks=1 00:37:01.484 --rc geninfo_unexecuted_blocks=1 00:37:01.484 00:37:01.484 ' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:01.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.484 --rc genhtml_branch_coverage=1 00:37:01.484 --rc genhtml_function_coverage=1 00:37:01.484 --rc genhtml_legend=1 00:37:01.484 --rc geninfo_all_blocks=1 00:37:01.484 --rc geninfo_unexecuted_blocks=1 00:37:01.484 00:37:01.484 ' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:01.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.484 --rc genhtml_branch_coverage=1 00:37:01.484 --rc genhtml_function_coverage=1 00:37:01.484 --rc genhtml_legend=1 00:37:01.484 --rc geninfo_all_blocks=1 00:37:01.484 --rc geninfo_unexecuted_blocks=1 00:37:01.484 00:37:01.484 ' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:01.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.484 --rc genhtml_branch_coverage=1 00:37:01.484 --rc genhtml_function_coverage=1 00:37:01.484 --rc genhtml_legend=1 00:37:01.484 --rc geninfo_all_blocks=1 00:37:01.484 --rc geninfo_unexecuted_blocks=1 00:37:01.484 00:37:01.484 ' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:01.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.484 21:04:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:04.784 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:04.784 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:04.784 Found net devices under 0000:84:00.0: cvl_0_0 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:04.784 Found net devices under 0000:84:00.1: cvl_0_1 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:37:04.784 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:37:04.785 00:37:04.785 --- 10.0.0.2 ping statistics --- 00:37:04.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.785 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:04.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:37:04.785 00:37:04.785 --- 10.0.0.1 ping statistics --- 00:37:04.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.785 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1861845 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1861845 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1861845 ']' 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:04.785 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.785 [2024-10-08 21:04:33.312134] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:37:04.785 [2024-10-08 21:04:33.312228] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.785 [2024-10-08 21:04:33.415854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:05.045 [2024-10-08 21:04:33.648042] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:05.045 [2024-10-08 21:04:33.648157] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.045 [2024-10-08 21:04:33.648193] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.045 [2024-10-08 21:04:33.648233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.045 [2024-10-08 21:04:33.648245] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:05.045 [2024-10-08 21:04:33.650010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.045 [2024-10-08 21:04:33.650003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:05.045 [2024-10-08 21:04:33.649898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.045 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:05.045 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:05.045 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:05.045 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:05.045 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.304 [2024-10-08 21:04:33.823639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.304 Malloc0 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:37:05.304 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.305 [2024-10-08 21:04:33.884295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:05.305 { 00:37:05.305 "params": { 00:37:05.305 "name": "Nvme$subsystem", 00:37:05.305 "trtype": "$TEST_TRANSPORT", 00:37:05.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.305 "adrfam": "ipv4", 00:37:05.305 "trsvcid": "$NVMF_PORT", 00:37:05.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.305 "hdgst": ${hdgst:-false}, 00:37:05.305 "ddgst": ${ddgst:-false} 00:37:05.305 }, 00:37:05.305 "method": "bdev_nvme_attach_controller" 00:37:05.305 } 00:37:05.305 EOF 00:37:05.305 )") 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:37:05.305 21:04:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:05.305 "params": { 00:37:05.305 "name": "Nvme1", 00:37:05.305 "trtype": "tcp", 00:37:05.305 "traddr": "10.0.0.2", 00:37:05.305 "adrfam": "ipv4", 00:37:05.305 "trsvcid": "4420", 00:37:05.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.305 "hdgst": false, 00:37:05.305 "ddgst": false 00:37:05.305 }, 00:37:05.305 "method": "bdev_nvme_attach_controller" 00:37:05.305 }' 00:37:05.305 [2024-10-08 21:04:33.940019] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:37:05.305 [2024-10-08 21:04:33.940101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861995 ] 00:37:05.305 [2024-10-08 21:04:34.009914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.564 [2024-10-08 21:04:34.135153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.822 Running I/O for 1 seconds... 00:37:06.757 8555.00 IOPS, 33.42 MiB/s 00:37:06.757 Latency(us) 00:37:06.757 [2024-10-08T19:04:35.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:06.757 Verification LBA range: start 0x0 length 0x4000 00:37:06.757 Nvme1n1 : 1.01 8606.89 33.62 0.00 0.00 14808.63 1407.81 13689.74 00:37:06.757 [2024-10-08T19:04:35.520Z] =================================================================================================================== 00:37:06.757 [2024-10-08T19:04:35.520Z] Total : 8606.89 33.62 0.00 0.00 14808.63 1407.81 13689.74 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1862134 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:07.015 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:07.015 { 00:37:07.015 "params": { 00:37:07.015 "name": "Nvme$subsystem", 00:37:07.016 "trtype": "$TEST_TRANSPORT", 00:37:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.016 "adrfam": "ipv4", 00:37:07.016 "trsvcid": "$NVMF_PORT", 00:37:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.016 "hdgst": ${hdgst:-false}, 00:37:07.016 "ddgst": ${ddgst:-false} 00:37:07.016 }, 00:37:07.016 "method": "bdev_nvme_attach_controller" 00:37:07.016 } 00:37:07.016 EOF 00:37:07.016 )") 00:37:07.016 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:37:07.016 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:37:07.016 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:37:07.016 21:04:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:07.016 "params": { 00:37:07.016 "name": "Nvme1", 00:37:07.016 "trtype": "tcp", 00:37:07.016 "traddr": "10.0.0.2", 00:37:07.016 "adrfam": "ipv4", 00:37:07.016 "trsvcid": "4420", 00:37:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:07.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:07.016 "hdgst": false, 00:37:07.016 "ddgst": false 00:37:07.016 }, 00:37:07.016 "method": "bdev_nvme_attach_controller" 00:37:07.016 }' 00:37:07.016 [2024-10-08 21:04:35.715488] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:37:07.016 [2024-10-08 21:04:35.715576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862134 ] 00:37:07.274 [2024-10-08 21:04:35.780820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.274 [2024-10-08 21:04:35.893660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.533 Running I/O for 15 seconds... 00:37:09.403 8628.00 IOPS, 33.70 MiB/s [2024-10-08T19:04:38.735Z] 8626.50 IOPS, 33.70 MiB/s [2024-10-08T19:04:38.735Z] 21:04:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1861845 00:37:09.972 21:04:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:09.972 [2024-10-08 21:04:38.677798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.972 [2024-10-08 21:04:38.677846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.677879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.677898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.677918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.677934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.677951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 
21:04:38.678193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.678952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.678991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.973 [2024-10-08 21:04:38.679887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.973 [2024-10-08 21:04:38.679921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.973 [2024-10-08 21:04:38.679965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.679980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.973 [2024-10-08 21:04:38.679994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.680025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.973 [2024-10-08 21:04:38.680060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.680098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.973 [2024-10-08 21:04:38.680132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.973 [2024-10-08 21:04:38.680170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 
[2024-10-08 21:04:38.680596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.680929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.680964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:51 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.681940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.681982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43824 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.974 [2024-10-08 21:04:38.682603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.974 [2024-10-08 21:04:38.682636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 
21:04:38.682858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.682934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.682947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.683942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.683982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.975 [2024-10-08 21:04:38.684919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:09.975 [2024-10-08 21:04:38.684934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.976 [2024-10-08 21:04:38.684974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.976 [2024-10-08 21:04:38.685045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.976 [2024-10-08 21:04:38.685116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.976 [2024-10-08 21:04:38.685186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.976 [2024-10-08 21:04:38.685624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 
21:04:38.685672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a8d30 is same with the state(6) to be set 00:37:09.976 [2024-10-08 21:04:38.685725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:09.976 [2024-10-08 21:04:38.685736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:09.976 [2024-10-08 21:04:38.685747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43488 len:8 PRP1 0x0 PRP2 0x0 00:37:09.976 [2024-10-08 21:04:38.685760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685818] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17a8d30 was disconnected and freed. reset controller. 00:37:09.976 [2024-10-08 21:04:38.685889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.976 [2024-10-08 21:04:38.685910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.976 [2024-10-08 21:04:38.685963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.685976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.976 [2024-10-08 21:04:38.685999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.686037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.976 [2024-10-08 21:04:38.686071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.976 [2024-10-08 21:04:38.686102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:09.976 [2024-10-08 21:04:38.693501] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.976 [2024-10-08 21:04:38.693586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:09.976 [2024-10-08 21:04:38.694902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.976 [2024-10-08 21:04:38.694974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:09.976 [2024-10-08 21:04:38.695014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:09.976 [2024-10-08 21:04:38.695553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:09.976 [2024-10-08 21:04:38.696135] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.976 [2024-10-08 21:04:38.696191] 
nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.976 [2024-10-08 21:04:38.696229] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.976 [2024-10-08 21:04:38.704345] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.976 [2024-10-08 21:04:38.712997] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.976 [2024-10-08 21:04:38.713817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.976 [2024-10-08 21:04:38.713891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:09.976 [2024-10-08 21:04:38.713932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:09.976 [2024-10-08 21:04:38.714468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:09.976 [2024-10-08 21:04:38.715039] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.976 [2024-10-08 21:04:38.715095] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.976 [2024-10-08 21:04:38.715129] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.976 [2024-10-08 21:04:38.723240] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.976 [2024-10-08 21:04:38.731420] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.976 [2024-10-08 21:04:38.732272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.976 [2024-10-08 21:04:38.732348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:09.976 [2024-10-08 21:04:38.732390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:09.976 [2024-10-08 21:04:38.732879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.236 [2024-10-08 21:04:38.733193] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.236 [2024-10-08 21:04:38.733222] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.236 [2024-10-08 21:04:38.733240] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.236 [2024-10-08 21:04:38.741318] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.236 [2024-10-08 21:04:38.750070] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.236 [2024-10-08 21:04:38.750897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.236 [2024-10-08 21:04:38.750971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.236 [2024-10-08 21:04:38.751012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.751549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.752116] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.752170] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.752203] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.760394] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.769034] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.769819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.769891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.769933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.770473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.771048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.771102] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.771137] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.779252] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.237 [2024-10-08 21:04:38.787890] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.788675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.788748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.788788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.789322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.789893] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.789946] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.789980] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.798079] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.806711] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.807513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.807583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.807623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.808179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.808749] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.808804] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.808838] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.816950] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.237 [2024-10-08 21:04:38.825610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.826443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.826513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.826567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.827137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.827715] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.827768] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.827804] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.835955] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.844591] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.845440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.845511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.845553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.846112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.846680] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.846733] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.846767] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.854885] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.237 [2024-10-08 21:04:38.863514] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.864336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.864407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.864446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.865005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.865555] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.865607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.865641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.873774] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.882393] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.883187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.883258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.883298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.883856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.884406] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.884470] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.884506] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.892619] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.237 [2024-10-08 21:04:38.901251] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.902067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.902137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.902176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.902739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.903288] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.903340] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.903374] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.911485] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.920114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.920928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.920999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.921038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.921572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.922143] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.922196] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.922230] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.930345] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.237 [2024-10-08 21:04:38.939017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.237 [2024-10-08 21:04:38.939834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.237 [2024-10-08 21:04:38.939904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.237 [2024-10-08 21:04:38.939945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.237 [2024-10-08 21:04:38.940479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.237 [2024-10-08 21:04:38.941046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.237 [2024-10-08 21:04:38.941100] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.237 [2024-10-08 21:04:38.941133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.237 [2024-10-08 21:04:38.949245] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.237 [2024-10-08 21:04:38.957933] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.238 [2024-10-08 21:04:38.958724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.238 [2024-10-08 21:04:38.958795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.238 [2024-10-08 21:04:38.958836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.238 [2024-10-08 21:04:38.959373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.238 [2024-10-08 21:04:38.959944] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.238 [2024-10-08 21:04:38.959998] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.238 [2024-10-08 21:04:38.960032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.238 [2024-10-08 21:04:38.968157] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.238 [2024-10-08 21:04:38.976810] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.238 [2024-10-08 21:04:38.977610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.238 [2024-10-08 21:04:38.977698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.238 [2024-10-08 21:04:38.977742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.238 [2024-10-08 21:04:38.978278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.238 [2024-10-08 21:04:38.978849] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.238 [2024-10-08 21:04:38.978904] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.238 [2024-10-08 21:04:38.978939] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.238 [2024-10-08 21:04:38.987073] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.238 [2024-10-08 21:04:38.995437] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.238 [2024-10-08 21:04:38.996289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.238 [2024-10-08 21:04:38.996329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.238 [2024-10-08 21:04:38.996351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.238 [2024-10-08 21:04:38.996642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.238 [2024-10-08 21:04:38.997206] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.238 [2024-10-08 21:04:38.997262] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.238 [2024-10-08 21:04:38.997297] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.497 [2024-10-08 21:04:39.004806] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.497 [2024-10-08 21:04:39.014393] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.497 [2024-10-08 21:04:39.015242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.497 [2024-10-08 21:04:39.015314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.497 [2024-10-08 21:04:39.015356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.497 [2024-10-08 21:04:39.015939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.497 [2024-10-08 21:04:39.016490] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.497 [2024-10-08 21:04:39.016541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.497 [2024-10-08 21:04:39.016576] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.497 [2024-10-08 21:04:39.024722] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.033352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.034210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.034282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.034323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.034888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.035442] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.035495] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.035529] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.043706] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.498 [2024-10-08 21:04:39.052313] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.053200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.053273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.053314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.053877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.054428] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.054480] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.054513] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.062630] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.071270] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.072096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.072167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.072207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.072770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.073321] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.073373] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.073420] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.081273] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.498 [2024-10-08 21:04:39.090388] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.091265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.091336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.091377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.091939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.092490] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.092542] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.092576] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.100714] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.109332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.110180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.110251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.110290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.110850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.111399] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.111450] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.111486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.119600] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.498 [2024-10-08 21:04:39.128254] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.129079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.129149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.129199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.129758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.130316] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.130368] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.130401] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.138558] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.147213] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.148080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.148150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.148190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.148753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 7259.67 IOPS, 28.36 MiB/s [2024-10-08T19:04:39.261Z] [2024-10-08 21:04:39.153212] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.153261] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.153295] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.157696] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:10.498 [2024-10-08 21:04:39.161532] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.498 [2024-10-08 21:04:39.176417] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.177215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.177287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.177327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.177891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.178442] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.178494] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.178528] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.186682] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.195347] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.196170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.196242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.196282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.196841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.197390] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.197442] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.197476] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.205285] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.498 [2024-10-08 21:04:39.214032] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.214773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.214845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.214885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.498 [2024-10-08 21:04:39.215432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.498 [2024-10-08 21:04:39.216008] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.498 [2024-10-08 21:04:39.216063] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.498 [2024-10-08 21:04:39.216099] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.498 [2024-10-08 21:04:39.224244] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.498 [2024-10-08 21:04:39.232906] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.498 [2024-10-08 21:04:39.233644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.498 [2024-10-08 21:04:39.233732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.498 [2024-10-08 21:04:39.233775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.499 [2024-10-08 21:04:39.234311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.499 [2024-10-08 21:04:39.234885] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.499 [2024-10-08 21:04:39.234940] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.499 [2024-10-08 21:04:39.234974] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.499 [2024-10-08 21:04:39.243132] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.499 [2024-10-08 21:04:39.251784] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.499 [2024-10-08 21:04:39.252556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.499 [2024-10-08 21:04:39.252625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.499 [2024-10-08 21:04:39.252686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.499 [2024-10-08 21:04:39.253244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.499 [2024-10-08 21:04:39.253816] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.499 [2024-10-08 21:04:39.253869] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.499 [2024-10-08 21:04:39.253903] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.261100] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.759 [2024-10-08 21:04:39.270450] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.271251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.271322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.271363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.271925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.272473] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.272524] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.272570] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.280726] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.759 [2024-10-08 21:04:39.289395] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.290217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.290288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.290328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.290885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.291438] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.291490] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.291524] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.299645] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.759 [2024-10-08 21:04:39.308307] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.309115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.309185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.309225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.309782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.310333] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.310386] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.310422] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.318554] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.759 [2024-10-08 21:04:39.327224] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.328027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.328099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.328139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.328707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.329257] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.329309] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.329342] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.337473] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.759 [2024-10-08 21:04:39.346150] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.346915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.346986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.347026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.347562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.348131] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.348195] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.348229] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.356340] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.759 [2024-10-08 21:04:39.364972] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.365779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.759 [2024-10-08 21:04:39.365851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.759 [2024-10-08 21:04:39.365890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.759 [2024-10-08 21:04:39.366426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.759 [2024-10-08 21:04:39.366998] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.759 [2024-10-08 21:04:39.367051] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.759 [2024-10-08 21:04:39.367086] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.759 [2024-10-08 21:04:39.375209] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.759 [2024-10-08 21:04:39.383597] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.759 [2024-10-08 21:04:39.384422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.384493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.384533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.385088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.385634] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.385701] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.385737] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.393856] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.760 [2024-10-08 21:04:39.402550] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.403408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.403479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.403519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.404097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.404648] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.404722] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.404758] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.412889] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.760 [2024-10-08 21:04:39.421513] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.422352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.422432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.422473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.423037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.423588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.423639] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.423705] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.431845] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.760 [2024-10-08 21:04:39.440499] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.441338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.441410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.441450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.442016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.442566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.442619] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.442669] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.450798] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.760 [2024-10-08 21:04:39.459407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.460260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.460332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.460372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.460937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.461511] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.461564] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.461610] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.469751] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.760 [2024-10-08 21:04:39.478368] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.479205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.479277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.479316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.479882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.480432] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.480484] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.480518] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.488638] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.760 [2024-10-08 21:04:39.497281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.498124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.498194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.498234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.498797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.499347] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.499399] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.499434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.760 [2024-10-08 21:04:39.507548] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.760 [2024-10-08 21:04:39.516149] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.760 [2024-10-08 21:04:39.516735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.760 [2024-10-08 21:04:39.516816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:10.760 [2024-10-08 21:04:39.516858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:10.760 [2024-10-08 21:04:39.517395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:10.760 [2024-10-08 21:04:39.518010] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.760 [2024-10-08 21:04:39.518067] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.760 [2024-10-08 21:04:39.518101] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.020 [2024-10-08 21:04:39.525223] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.020 [2024-10-08 21:04:39.535140] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.020 [2024-10-08 21:04:39.535972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.020 [2024-10-08 21:04:39.536061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.020 [2024-10-08 21:04:39.536103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.020 [2024-10-08 21:04:39.536641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.020 [2024-10-08 21:04:39.537222] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.020 [2024-10-08 21:04:39.537274] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.020 [2024-10-08 21:04:39.537308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.020 [2024-10-08 21:04:39.545460] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.020 [2024-10-08 21:04:39.554106] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.020 [2024-10-08 21:04:39.554948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.020 [2024-10-08 21:04:39.555019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.020 [2024-10-08 21:04:39.555059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.020 [2024-10-08 21:04:39.555594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.020 [2024-10-08 21:04:39.556165] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.020 [2024-10-08 21:04:39.556220] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.020 [2024-10-08 21:04:39.556254] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.020 [2024-10-08 21:04:39.564368] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.020 [2024-10-08 21:04:39.573005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.020 [2024-10-08 21:04:39.573831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.020 [2024-10-08 21:04:39.573902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.020 [2024-10-08 21:04:39.573942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.020 [2024-10-08 21:04:39.574477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.020 [2024-10-08 21:04:39.575049] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.575103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.575138] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.583265] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.591905] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.592726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.592798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.592838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.593374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.593962] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.594016] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.594051] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.602175] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.021 [2024-10-08 21:04:39.610806] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.611618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.611706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.611749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.612285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.612858] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.612911] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.612946] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.621061] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.629714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.630521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.630591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.630630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.631188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.631766] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.631819] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.631854] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.640017] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.021 [2024-10-08 21:04:39.648632] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.649460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.649531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.649570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.650129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.650699] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.650754] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.650788] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.658977] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.667602] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.668443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.668516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.668555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.669115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.669685] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.669738] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.669774] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.677904] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.021 [2024-10-08 21:04:39.686534] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.687381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.687452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.687493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.688056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.688606] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.688674] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.688714] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.696842] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.702951] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.703616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.703704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.703746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.704282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.704673] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.704726] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.704760] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.713062] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.021 [2024-10-08 21:04:39.721765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.722575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.722645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.722721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.723261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.723842] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.723896] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.723931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.732043] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.740721] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.741528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.741598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.741639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.742203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.742774] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.742829] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.742863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.750986] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.021 [2024-10-08 21:04:39.759610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.760429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.760500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.760539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.021 [2024-10-08 21:04:39.761098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.021 [2024-10-08 21:04:39.761648] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.021 [2024-10-08 21:04:39.761720] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.021 [2024-10-08 21:04:39.761755] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.021 [2024-10-08 21:04:39.769880] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.021 [2024-10-08 21:04:39.778219] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.021 [2024-10-08 21:04:39.779061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.021 [2024-10-08 21:04:39.779145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.021 [2024-10-08 21:04:39.779194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.022 [2024-10-08 21:04:39.779735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.022 [2024-10-08 21:04:39.780033] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.022 [2024-10-08 21:04:39.780068] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.022 [2024-10-08 21:04:39.780088] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.282 [2024-10-08 21:04:39.787571] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.282 [2024-10-08 21:04:39.797179] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.282 [2024-10-08 21:04:39.797986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.282 [2024-10-08 21:04:39.798058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.282 [2024-10-08 21:04:39.798098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.282 [2024-10-08 21:04:39.798634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.282 [2024-10-08 21:04:39.799205] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.282 [2024-10-08 21:04:39.799255] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.282 [2024-10-08 21:04:39.799289] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.282 [2024-10-08 21:04:39.807412] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:39.816046] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.816893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.816965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.817005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.817540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.818113] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.818166] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.818201] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.826336] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.283 [2024-10-08 21:04:39.834982] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.835786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.835858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.835898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.836434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.837013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.837068] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.837103] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.845359] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:39.854031] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.854825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.854896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.854937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.855473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.856046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.856100] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.856134] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.863857] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.283 [2024-10-08 21:04:39.872976] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.873737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.873808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.873848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.874384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.874947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.875001] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.875035] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.883172] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:39.891861] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.892723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.892795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.892835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.893372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.893944] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.893998] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.894032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.902164] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.283 [2024-10-08 21:04:39.910835] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.911683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.911756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.911796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.912344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.912923] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.912978] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.913013] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.920876] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:39.930002] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.930808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.930884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.930925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.931460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.932038] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.932093] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.932128] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.940245] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.283 [2024-10-08 21:04:39.948918] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.949744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.949815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.949856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.950391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.950964] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.951019] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.951053] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.956090] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:39.967594] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.968436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.968506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.968546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.969106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.969672] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.969725] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.969772] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.977910] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.283 [2024-10-08 21:04:39.986543] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:39.988199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:39.988286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.283 [2024-10-08 21:04:39.988329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.283 [2024-10-08 21:04:39.988894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.283 [2024-10-08 21:04:39.989447] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.283 [2024-10-08 21:04:39.989501] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.283 [2024-10-08 21:04:39.989537] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.283 [2024-10-08 21:04:39.996808] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:11.283 [2024-10-08 21:04:39.997796] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.283 [2024-10-08 21:04:40.014206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.283 [2024-10-08 21:04:40.014954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.283 [2024-10-08 21:04:40.015027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.284 [2024-10-08 21:04:40.015069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.284 [2024-10-08 21:04:40.015604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.284 [2024-10-08 21:04:40.015970] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.284 [2024-10-08 21:04:40.016025] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.284 [2024-10-08 21:04:40.016060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.284 [2024-10-08 21:04:40.022323] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.284 [2024-10-08 21:04:40.033042] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.284 [2024-10-08 21:04:40.033834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.284 [2024-10-08 21:04:40.033905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.284 [2024-10-08 21:04:40.033946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.284 [2024-10-08 21:04:40.034484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.284 [2024-10-08 21:04:40.035058] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.284 [2024-10-08 21:04:40.035113] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.284 [2024-10-08 21:04:40.035149] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.284 [2024-10-08 21:04:40.042593] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.544 [2024-10-08 21:04:40.051083] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.544 [2024-10-08 21:04:40.051916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.544 [2024-10-08 21:04:40.051990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.544 [2024-10-08 21:04:40.052034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.544 [2024-10-08 21:04:40.052569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.544 [2024-10-08 21:04:40.053140] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.544 [2024-10-08 21:04:40.053202] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.544 [2024-10-08 21:04:40.053237] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.544 [2024-10-08 21:04:40.061377] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.544 [2024-10-08 21:04:40.070032] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.544 [2024-10-08 21:04:40.070853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.544 [2024-10-08 21:04:40.070925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.544 [2024-10-08 21:04:40.070965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.544 [2024-10-08 21:04:40.071500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.544 [2024-10-08 21:04:40.072078] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.072132] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.072167] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.080286] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.545 [2024-10-08 21:04:40.088938] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.089723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.089796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.089836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.090376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.090953] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.091007] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.091042] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.099150] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.545 [2024-10-08 21:04:40.107769] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.108578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.108647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.108721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.109271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.109843] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.109898] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.109933] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.118048] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.545 [2024-10-08 21:04:40.126696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.127512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.127582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.127622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.128178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.128750] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.128803] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.128836] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.136955] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.545 [2024-10-08 21:04:40.145616] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.146433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.146504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.146545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.147107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.147674] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.147735] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.147770] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 5444.75 IOPS, 21.27 MiB/s [2024-10-08T19:04:40.308Z] [2024-10-08 21:04:40.155992] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:11.545 [2024-10-08 21:04:40.159835] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.545 [2024-10-08 21:04:40.174712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.175490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.175561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.175601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.176170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.176750] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.176804] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.176838] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.184962] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.545 [2024-10-08 21:04:40.193573] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.194438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.194509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.194549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.195113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.195692] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.195746] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.195780] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.203887] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.545 [2024-10-08 21:04:40.209344] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.209992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.210064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.210104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.210639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.211209] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.211262] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.211295] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.219407] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.545 [2024-10-08 21:04:40.228534] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.229358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.229427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.229467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.230026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.230576] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.230628] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.230682] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.238797] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.545 [2024-10-08 21:04:40.247485] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.545 [2024-10-08 21:04:40.248306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.545 [2024-10-08 21:04:40.248376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.545 [2024-10-08 21:04:40.248416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.545 [2024-10-08 21:04:40.248972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.545 [2024-10-08 21:04:40.249521] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.545 [2024-10-08 21:04:40.249572] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.545 [2024-10-08 21:04:40.249606] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.545 [2024-10-08 21:04:40.257729] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.545 [2024-10-08 21:04:40.266343] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.546 [2024-10-08 21:04:40.267132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.546 [2024-10-08 21:04:40.267202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.546 [2024-10-08 21:04:40.267243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.546 [2024-10-08 21:04:40.267803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.546 [2024-10-08 21:04:40.268352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.546 [2024-10-08 21:04:40.268404] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.546 [2024-10-08 21:04:40.268438] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.546 [2024-10-08 21:04:40.276553] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.546 [2024-10-08 21:04:40.285178] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.546 [2024-10-08 21:04:40.285987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.546 [2024-10-08 21:04:40.286056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.546 [2024-10-08 21:04:40.286096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.546 [2024-10-08 21:04:40.286630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.546 [2024-10-08 21:04:40.287203] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.546 [2024-10-08 21:04:40.287256] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.546 [2024-10-08 21:04:40.287290] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.546 [2024-10-08 21:04:40.295420] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.546 [2024-10-08 21:04:40.303842] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.546 [2024-10-08 21:04:40.304389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.546 [2024-10-08 21:04:40.304471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.546 [2024-10-08 21:04:40.304527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.546 [2024-10-08 21:04:40.305092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.546 [2024-10-08 21:04:40.305772] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.546 [2024-10-08 21:04:40.305835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.546 [2024-10-08 21:04:40.305871] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.806 [2024-10-08 21:04:40.313598] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.806 [2024-10-08 21:04:40.322739] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.806 [2024-10-08 21:04:40.323562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.806 [2024-10-08 21:04:40.323635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.806 [2024-10-08 21:04:40.323711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.806 [2024-10-08 21:04:40.324251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.806 [2024-10-08 21:04:40.324821] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.806 [2024-10-08 21:04:40.324875] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.806 [2024-10-08 21:04:40.324911] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.806 [2024-10-08 21:04:40.333023] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.806 [2024-10-08 21:04:40.341687] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.806 [2024-10-08 21:04:40.342459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.806 [2024-10-08 21:04:40.342530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.806 [2024-10-08 21:04:40.342571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.343126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.343722] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.343778] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.343813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.351941] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.360562] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.361376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.361448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.361489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.362052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.362601] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.362686] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.362726] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.370845] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.807 [2024-10-08 21:04:40.379465] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.380266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.380338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.380379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.380939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.381491] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.381542] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.381576] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.388072] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.397679] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.398222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.398292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.398332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.398890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.399436] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.399489] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.399523] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.406195] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.807 [2024-10-08 21:04:40.415546] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.416171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.416242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.416283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.416795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.417268] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.417321] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.417356] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.424248] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.433081] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.433833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.433872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.433894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.434436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.434883] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.434913] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.434932] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.442875] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.807 [2024-10-08 21:04:40.447899] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.448398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.448429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.448447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.448697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.448940] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.448962] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.448977] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.453271] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.462649] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.463113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.463145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.463162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.463401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.463644] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.463677] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.463693] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.467264] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.807 [2024-10-08 21:04:40.478897] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.479600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.479689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.479728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.479973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.480216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.480238] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.480254] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.486597] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.496412] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.496847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.496900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.496922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.497176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.497417] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.497440] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.497455] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.504471] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.807 [2024-10-08 21:04:40.515129] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.807 [2024-10-08 21:04:40.515620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.807 [2024-10-08 21:04:40.515708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.807 [2024-10-08 21:04:40.515759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.807 [2024-10-08 21:04:40.515997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.807 [2024-10-08 21:04:40.516239] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.807 [2024-10-08 21:04:40.516262] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.807 [2024-10-08 21:04:40.516277] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.807 [2024-10-08 21:04:40.524024] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.807 [2024-10-08 21:04:40.534197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.808 [2024-10-08 21:04:40.534734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.808 [2024-10-08 21:04:40.534805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.808 [2024-10-08 21:04:40.534858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.808 [2024-10-08 21:04:40.535098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.808 [2024-10-08 21:04:40.535340] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.808 [2024-10-08 21:04:40.535363] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.808 [2024-10-08 21:04:40.535384] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.808 [2024-10-08 21:04:40.543147] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.808 [2024-10-08 21:04:40.552950] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.808 [2024-10-08 21:04:40.553462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.808 [2024-10-08 21:04:40.553534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:11.808 [2024-10-08 21:04:40.553588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:11.808 [2024-10-08 21:04:40.553837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:11.808 [2024-10-08 21:04:40.554080] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.808 [2024-10-08 21:04:40.554104] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.808 [2024-10-08 21:04:40.554119] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.808 [2024-10-08 21:04:40.561869] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.068 [2024-10-08 21:04:40.571027] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.571546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.571580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.571599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.571849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.572113] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.572140] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.572205] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.579753] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.068 [2024-10-08 21:04:40.589968] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.590510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.590582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.590634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.590882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.591125] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.591148] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.591163] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.598905] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.068 [2024-10-08 21:04:40.609104] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.609632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.609733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.609788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.610027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.610269] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.610291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.610306] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.617981] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.068 [2024-10-08 21:04:40.628168] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.628703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.628775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.628827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.629066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.629309] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.629331] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.629347] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.637050] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.068 [2024-10-08 21:04:40.647272] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.647817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.647890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.647942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.648181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.648423] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.648446] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.648461] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.656202] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.068 [2024-10-08 21:04:40.666384] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.666925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.666994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.667045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.667283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.667531] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.667555] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.667570] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.675280] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.068 [2024-10-08 21:04:40.685480] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.685996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.686027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.686045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.686283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.686745] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.686769] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.686784] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.693332] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.068 [2024-10-08 21:04:40.704048] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.704570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.068 [2024-10-08 21:04:40.704639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.068 [2024-10-08 21:04:40.704714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.068 [2024-10-08 21:04:40.704953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.068 [2024-10-08 21:04:40.705195] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.068 [2024-10-08 21:04:40.705217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.068 [2024-10-08 21:04:40.705233] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.068 [2024-10-08 21:04:40.712968] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.068 [2024-10-08 21:04:40.720408] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.068 [2024-10-08 21:04:40.720948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.721021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.721074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.721313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.721555] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.721578] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.721593] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.729697] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.069 [2024-10-08 21:04:40.739400] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.069 [2024-10-08 21:04:40.739942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.740012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.740066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.740304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.740546] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.740570] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.740585] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.747636] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.069 [2024-10-08 21:04:40.757024] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.069 [2024-10-08 21:04:40.757549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.757618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.757685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.757926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.758168] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.758190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.758206] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.765805] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.069 [2024-10-08 21:04:40.775992] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.069 [2024-10-08 21:04:40.776511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.776581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.776635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.776885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.777127] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.777150] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.777165] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.784901] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.069 [2024-10-08 21:04:40.795083] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.069 [2024-10-08 21:04:40.795617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.795705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.795767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.796006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.796248] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.796271] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.796286] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.804029] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.069 [2024-10-08 21:04:40.813755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.069 [2024-10-08 21:04:40.814313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.069 [2024-10-08 21:04:40.814385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.069 [2024-10-08 21:04:40.814437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.069 [2024-10-08 21:04:40.814687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.069 [2024-10-08 21:04:40.814930] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.069 [2024-10-08 21:04:40.814953] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.069 [2024-10-08 21:04:40.814968] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.069 [2024-10-08 21:04:40.822710] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.329 [2024-10-08 21:04:40.831936] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.329 [2024-10-08 21:04:40.832480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.329 [2024-10-08 21:04:40.832553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.329 [2024-10-08 21:04:40.832609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.329 [2024-10-08 21:04:40.832892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.329 [2024-10-08 21:04:40.833138] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.329 [2024-10-08 21:04:40.833162] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.329 [2024-10-08 21:04:40.833178] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.329 [2024-10-08 21:04:40.840747] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.329 [2024-10-08 21:04:40.850975] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.329 [2024-10-08 21:04:40.851515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.329 [2024-10-08 21:04:40.851586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.329 [2024-10-08 21:04:40.851637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.329 [2024-10-08 21:04:40.851885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.329 [2024-10-08 21:04:40.852129] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.329 [2024-10-08 21:04:40.852158] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.329 [2024-10-08 21:04:40.852174] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.329 [2024-10-08 21:04:40.859909] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.329 [2024-10-08 21:04:40.870148] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.329 [2024-10-08 21:04:40.870689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.329 [2024-10-08 21:04:40.870762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.870811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.871050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.871292] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.871315] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.871330] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.879093] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.330 [2024-10-08 21:04:40.888845] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.889395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.889466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.889518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.889768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.890011] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.890034] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.890050] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.897789] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.330 [2024-10-08 21:04:40.907992] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.908501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.908571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.908621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.908868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.909112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.909135] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.909150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.916889] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.330 [2024-10-08 21:04:40.927144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.927634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.927720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.927771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.928009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.928250] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.928273] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.928288] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.936084] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.330 [2024-10-08 21:04:40.946308] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.946839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.946909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.946962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.947208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.947450] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.947472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.947487] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.955215] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.330 [2024-10-08 21:04:40.965405] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.965895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.965926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.965945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.966184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.966426] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.966449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.966464] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.970800] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.330 [2024-10-08 21:04:40.984296] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:40.984811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:40.984881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:40.984940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:40.985180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:40.985423] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:40.985445] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:40.985460] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:40.993210] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.330 [2024-10-08 21:04:41.003389] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:41.003904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:41.003976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:41.004029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:41.004267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:41.004509] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:41.004533] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:41.004548] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:41.012290] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.330 [2024-10-08 21:04:41.022462] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:41.023003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:41.023074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:41.023127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:41.023373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:41.023615] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:41.023638] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:41.023670] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:41.031387] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.330 [2024-10-08 21:04:41.041571] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:41.042111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:41.042181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:41.042234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:41.042472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:41.042727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:41.042757] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:41.042773] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.330 [2024-10-08 21:04:41.050498] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.330 [2024-10-08 21:04:41.060712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.330 [2024-10-08 21:04:41.061169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.330 [2024-10-08 21:04:41.061239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.330 [2024-10-08 21:04:41.061288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.330 [2024-10-08 21:04:41.061527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.330 [2024-10-08 21:04:41.061802] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.330 [2024-10-08 21:04:41.061844] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.330 [2024-10-08 21:04:41.061880] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.331 [2024-10-08 21:04:41.069559] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.331 [2024-10-08 21:04:41.079746] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.331 [2024-10-08 21:04:41.080278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.331 [2024-10-08 21:04:41.080348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.331 [2024-10-08 21:04:41.080398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.331 [2024-10-08 21:04:41.080636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.331 [2024-10-08 21:04:41.080890] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.331 [2024-10-08 21:04:41.080913] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.331 [2024-10-08 21:04:41.080929] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.331 [2024-10-08 21:04:41.088548] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.591 [2024-10-08 21:04:41.098366] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.098924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.098999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.099050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.099289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.099532] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.099555] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.099571] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.107309] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.591 [2024-10-08 21:04:41.117499] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.118026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.118097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.118148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.118387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.118629] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.118664] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.118683] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.126417] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.591 [2024-10-08 21:04:41.136639] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.137187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.137258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.137312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.137552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.137806] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.137830] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.137845] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.145539] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.591 4355.80 IOPS, 17.01 MiB/s [2024-10-08T19:04:41.354Z] [2024-10-08 21:04:41.157786] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:12.591 [2024-10-08 21:04:41.159372] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.159911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.159983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.160035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.160273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.160515] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.160538] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.160553] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.168299] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
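The throughput sample above (4355.80 IOPS, 17.01 MiB/s) is consistent with bdevperf issuing 4 KiB I/Os; a quick sanity check of that arithmetic (illustrative, not part of the test scripts):
# 4355.80 IOPS * 4096 bytes per I/O, expressed in MiB/s
echo '4355.80 * 4096 / 1048576' | bc -l    # prints ~17.01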
00:37:12.591 [2024-10-08 21:04:41.178496] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.179028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.179099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.179168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.179406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.179660] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.179683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.179698] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.187400] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.591 [2024-10-08 21:04:41.197592] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.198120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.198191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.198243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.198482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.198736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.198760] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.198776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.206489] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.591 [2024-10-08 21:04:41.216674] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.217184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.217214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.217232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.217469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.217724] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.217748] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.217764] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.222688] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.591 [2024-10-08 21:04:41.235567] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.236124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.236195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.236248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.236497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.236749] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.236779] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.236795] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.244485] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.591 [2024-10-08 21:04:41.254724] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.255254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.255324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.255375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.255616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.255869] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.255893] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.255908] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.263611] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.591 [2024-10-08 21:04:41.273799] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.274317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.591 [2024-10-08 21:04:41.274386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.591 [2024-10-08 21:04:41.274434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.591 [2024-10-08 21:04:41.274692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.591 [2024-10-08 21:04:41.274935] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.591 [2024-10-08 21:04:41.274958] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.591 [2024-10-08 21:04:41.274973] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.591 [2024-10-08 21:04:41.282699] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.591 [2024-10-08 21:04:41.292872] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.591 [2024-10-08 21:04:41.293371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.592 [2024-10-08 21:04:41.293441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.592 [2024-10-08 21:04:41.293495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.592 [2024-10-08 21:04:41.293747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.592 [2024-10-08 21:04:41.293990] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.592 [2024-10-08 21:04:41.294013] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.592 [2024-10-08 21:04:41.294028] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.592 [2024-10-08 21:04:41.301761] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.592 [2024-10-08 21:04:41.311962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.592 [2024-10-08 21:04:41.312500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.592 [2024-10-08 21:04:41.312569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.592 [2024-10-08 21:04:41.312620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.592 [2024-10-08 21:04:41.312869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.592 [2024-10-08 21:04:41.313112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.592 [2024-10-08 21:04:41.313134] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.592 [2024-10-08 21:04:41.313150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.592 [2024-10-08 21:04:41.320887] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.592 [2024-10-08 21:04:41.331075] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.592 [2024-10-08 21:04:41.331568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.592 [2024-10-08 21:04:41.331637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.592 [2024-10-08 21:04:41.331697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.592 [2024-10-08 21:04:41.331937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.592 [2024-10-08 21:04:41.332179] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.592 [2024-10-08 21:04:41.332201] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.592 [2024-10-08 21:04:41.332216] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.592 [2024-10-08 21:04:41.339948] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.592 [2024-10-08 21:04:41.350104] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.592 [2024-10-08 21:04:41.350657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.592 [2024-10-08 21:04:41.350695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.592 [2024-10-08 21:04:41.350714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.592 [2024-10-08 21:04:41.350968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.592 [2024-10-08 21:04:41.351212] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.592 [2024-10-08 21:04:41.351235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.592 [2024-10-08 21:04:41.351250] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.358637] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.852 [2024-10-08 21:04:41.368870] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.369403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.369477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.369529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.369786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.370030] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.370053] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.370068] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.377812] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.852 [2024-10-08 21:04:41.388016] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.388545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.388616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.388695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.388937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.389179] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.389202] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.389218] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.396967] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.852 [2024-10-08 21:04:41.407144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.407684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.407755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.407805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.408044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.408286] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.408308] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.408324] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.416049] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.852 [2024-10-08 21:04:41.426238] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.426783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.426855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.426907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.427146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.427388] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.427410] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.427432] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.435177] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.852 [2024-10-08 21:04:41.444914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.445520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.445590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.445643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.446019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.446261] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.446284] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.446299] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.454081] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.852 [2024-10-08 21:04:41.463787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.464309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.464381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.464433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.464685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.464928] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.464951] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.464966] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.472692] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.852 [2024-10-08 21:04:41.481221] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.481736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.481807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.481862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.482101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.482342] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.482366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.482381] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.490119] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.852 [2024-10-08 21:04:41.500319] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.500864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.500933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.500985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.501224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.501466] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.852 [2024-10-08 21:04:41.501488] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.852 [2024-10-08 21:04:41.501503] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.852 [2024-10-08 21:04:41.509238] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.852 [2024-10-08 21:04:41.519432] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.852 [2024-10-08 21:04:41.519919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.852 [2024-10-08 21:04:41.519989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.852 [2024-10-08 21:04:41.520039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.852 [2024-10-08 21:04:41.520277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.852 [2024-10-08 21:04:41.520519] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.853 [2024-10-08 21:04:41.520541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.853 [2024-10-08 21:04:41.520556] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.853 [2024-10-08 21:04:41.528315] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.853 [2024-10-08 21:04:41.538505] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.853 [2024-10-08 21:04:41.541275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.853 [2024-10-08 21:04:41.541358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.853 [2024-10-08 21:04:41.541412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.853 [2024-10-08 21:04:41.541662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.853 [2024-10-08 21:04:41.541910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.853 [2024-10-08 21:04:41.541934] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.853 [2024-10-08 21:04:41.541949] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.853 [2024-10-08 21:04:41.548848] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:12.853 [2024-10-08 21:04:41.549479] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.853 [2024-10-08 21:04:41.567794] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.853 [2024-10-08 21:04:41.568324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.853 [2024-10-08 21:04:41.568397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.853 [2024-10-08 21:04:41.568451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.853 [2024-10-08 21:04:41.568714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.853 [2024-10-08 21:04:41.568957] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.853 [2024-10-08 21:04:41.568980] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.853 [2024-10-08 21:04:41.568995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.853 [2024-10-08 21:04:41.576708] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.853 [2024-10-08 21:04:41.586914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.853 [2024-10-08 21:04:41.587422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.853 [2024-10-08 21:04:41.587494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.853 [2024-10-08 21:04:41.587534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.853 [2024-10-08 21:04:41.587799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.853 [2024-10-08 21:04:41.588042] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.853 [2024-10-08 21:04:41.588066] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.853 [2024-10-08 21:04:41.588081] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.853 [2024-10-08 21:04:41.595822] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.853 [2024-10-08 21:04:41.604777] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.853 [2024-10-08 21:04:41.605289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.853 [2024-10-08 21:04:41.605360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:12.853 [2024-10-08 21:04:41.605413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:12.853 [2024-10-08 21:04:41.605662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:12.853 [2024-10-08 21:04:41.605906] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.853 [2024-10-08 21:04:41.605928] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.853 [2024-10-08 21:04:41.605944] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.853 [2024-10-08 21:04:41.613463] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.114 [2024-10-08 21:04:41.623562] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.624095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.624172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.624227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.624467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.624735] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.624760] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.624781] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.632472] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.114 [2024-10-08 21:04:41.642745] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.643309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.643381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.643434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.643692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.643936] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.643958] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.643974] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.651750] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.114 [2024-10-08 21:04:41.661484] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.661968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.662039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.662079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.662348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.662590] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.662613] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.662628] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1861845 Killed "${NVMF_APP[@]}" "$@" 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:13.114 [2024-10-08 21:04:41.670360] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
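For context on the repeated errno = 111 entries above: the bdevperf.sh harness has just killed the target application (the 'Killed "${NVMF_APP[@]}"' line), so every host-side reconnect to 10.0.0.2:4420 is refused until tgt_init brings a new nvmf_tgt up. A minimal, illustrative probe of that condition (not part of the test scripts):
# While no nvmf_tgt listens on 10.0.0.2:4420, a plain TCP connect is refused,
# which is what posix_sock_create reports as errno = 111 (ECONNREFUSED).
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused the connection (target still down)"
fi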
00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1862866 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1862866 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1862866 ']' 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:13.114 [2024-10-08 21:04:41.680554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:13.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:13.114 [2024-10-08 21:04:41.681066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.681137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 21:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.114 [2024-10-08 21:04:41.681177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.681433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.681702] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.681726] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.681741] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.687777] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.114 [2024-10-08 21:04:41.697723] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.698123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.698156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.698175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.698414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.698667] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.698692] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.698707] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.705271] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.114 [2024-10-08 21:04:41.714806] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.715449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.715518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.715559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.715933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.716482] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.716533] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.716568] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.721648] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.114 [2024-10-08 21:04:41.731379] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.732030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.732061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.732079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.732318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.732561] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.732584] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.732599] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.736347] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.114 [2024-10-08 21:04:41.748775] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.749436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.749506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.749548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.114 [2024-10-08 21:04:41.749926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.114 [2024-10-08 21:04:41.750456] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.114 [2024-10-08 21:04:41.750508] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.114 [2024-10-08 21:04:41.750543] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.114 [2024-10-08 21:04:41.757469] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.114 [2024-10-08 21:04:41.763169] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:37:13.114 [2024-10-08 21:04:41.763316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:13.114 [2024-10-08 21:04:41.765880] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.114 [2024-10-08 21:04:41.766645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.114 [2024-10-08 21:04:41.766713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.114 [2024-10-08 21:04:41.766732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.767055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.767601] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.767669] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.767711] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.774484] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.115 [2024-10-08 21:04:41.784699] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.115 [2024-10-08 21:04:41.785194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.115 [2024-10-08 21:04:41.785265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.115 [2024-10-08 21:04:41.785318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.785557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.785810] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.785834] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.785849] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.793563] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.115 [2024-10-08 21:04:41.803787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.115 [2024-10-08 21:04:41.804292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.115 [2024-10-08 21:04:41.804361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.115 [2024-10-08 21:04:41.804415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.804663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.804906] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.804929] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.804945] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.812687] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.115 [2024-10-08 21:04:41.822897] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.115 [2024-10-08 21:04:41.823362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.115 [2024-10-08 21:04:41.823431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.115 [2024-10-08 21:04:41.823486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.823742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.823985] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.824008] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.824024] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.831748] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.115 [2024-10-08 21:04:41.841951] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.115 [2024-10-08 21:04:41.842422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.115 [2024-10-08 21:04:41.842490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.115 [2024-10-08 21:04:41.842530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.842797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.843046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.843069] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.843084] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.850868] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.115 [2024-10-08 21:04:41.860783] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.115 [2024-10-08 21:04:41.861284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.115 [2024-10-08 21:04:41.861354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.115 [2024-10-08 21:04:41.861408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.115 [2024-10-08 21:04:41.861647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.115 [2024-10-08 21:04:41.861897] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.115 [2024-10-08 21:04:41.861920] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.115 [2024-10-08 21:04:41.861936] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.115 [2024-10-08 21:04:41.869712] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.373 [2024-10-08 21:04:41.879286] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.879851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.879894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.879918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.880196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.373 [2024-10-08 21:04:41.880449] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.373 [2024-10-08 21:04:41.880474] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.373 [2024-10-08 21:04:41.880490] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.373 [2024-10-08 21:04:41.888232] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.373 [2024-10-08 21:04:41.898512] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.899017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.899089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.899131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.899387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.373 [2024-10-08 21:04:41.899629] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.373 [2024-10-08 21:04:41.899661] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.373 [2024-10-08 21:04:41.899680] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.373 [2024-10-08 21:04:41.907438] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.373 [2024-10-08 21:04:41.917626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.917886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:13.373 [2024-10-08 21:04:41.918095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.918127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.918146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.918384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.373 [2024-10-08 21:04:41.918627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.373 [2024-10-08 21:04:41.918658] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.373 [2024-10-08 21:04:41.918676] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.373 [2024-10-08 21:04:41.926468] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.373 [2024-10-08 21:04:41.936717] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.937298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.937381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.937434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.937692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.373 [2024-10-08 21:04:41.937939] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.373 [2024-10-08 21:04:41.937963] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.373 [2024-10-08 21:04:41.937980] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.373 [2024-10-08 21:04:41.945726] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.373 [2024-10-08 21:04:41.955493] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.955977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.956049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.956089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.956343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.373 [2024-10-08 21:04:41.956586] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.373 [2024-10-08 21:04:41.956609] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.373 [2024-10-08 21:04:41.956624] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.373 [2024-10-08 21:04:41.964302] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.373 [2024-10-08 21:04:41.974493] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.373 [2024-10-08 21:04:41.974985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.373 [2024-10-08 21:04:41.975073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.373 [2024-10-08 21:04:41.975127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.373 [2024-10-08 21:04:41.975366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:41.975608] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:41.975632] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:41.975648] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:41.983394] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.374 [2024-10-08 21:04:41.991549] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:41.992017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:41.992049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:41.992067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:41.992304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:41.992546] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:41.992569] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:41.992585] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.000305] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.374 [2024-10-08 21:04:42.010496] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.011044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.011114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.011168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.011407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.011658] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.011683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.011706] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.019429] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.374 [2024-10-08 21:04:42.029482] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.030086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.030165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.030222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.030467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.030736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.030761] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.030778] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.038505] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.374 [2024-10-08 21:04:42.048243] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.048807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.048903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.048957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.049201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.049445] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.049468] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.049484] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.057270] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.374 [2024-10-08 21:04:42.066988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.067583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.067673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.067711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.067949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.068191] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.068214] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.068231] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.075961] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.374 [2024-10-08 21:04:42.085978] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.086480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.086550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.086602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.086852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.087096] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.087118] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.087134] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.094891] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.374 [2024-10-08 21:04:42.105074] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.105639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.105734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.105789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.106028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.106270] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.106292] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.374 [2024-10-08 21:04:42.106307] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.374 [2024-10-08 21:04:42.114012] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.374 [2024-10-08 21:04:42.123222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:13.374 [2024-10-08 21:04:42.123298] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:13.374 [2024-10-08 21:04:42.123339] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:13.374 [2024-10-08 21:04:42.123378] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:13.374 [2024-10-08 21:04:42.123391] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:13.374 [2024-10-08 21:04:42.123760] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.374 [2024-10-08 21:04:42.124214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.374 [2024-10-08 21:04:42.124284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.374 [2024-10-08 21:04:42.124325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.374 [2024-10-08 21:04:42.124580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.374 [2024-10-08 21:04:42.124622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:13.374 [2024-10-08 21:04:42.124675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:13.374 [2024-10-08 21:04:42.124680] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.374 [2024-10-08 21:04:42.124834] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.374 [2024-10-08 21:04:42.124857] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.375 [2024-10-08 21:04:42.124872] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:13.375 [2024-10-08 21:04:42.128446] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.634 [2024-10-08 21:04:42.137881] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.138454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.138497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.138525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.138829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.139110] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.139137] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.139155] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.142792] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.634 [2024-10-08 21:04:42.153677] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 3629.83 IOPS, 14.18 MiB/s [2024-10-08T19:04:42.397Z] [2024-10-08 21:04:42.154256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.154298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.154328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.154580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.154840] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.154865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.154883] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.158480] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.634 [2024-10-08 21:04:42.167791] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.168366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.168420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.168442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.168710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.168958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.168981] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.169000] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.172574] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.634 [2024-10-08 21:04:42.181904] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.182465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.182506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.182527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.182801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.183049] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.183073] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.183090] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.186689] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.634 [2024-10-08 21:04:42.195982] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.196519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.196556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.196585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.196845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.197092] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.197116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.197133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.200714] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.634 [2024-10-08 21:04:42.210019] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.210561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.210602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.210634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.210891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.211139] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.211163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.211182] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.214805] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.634 [2024-10-08 21:04:42.224104] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.224578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.224610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.224628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.634 [2024-10-08 21:04:42.224877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.634 [2024-10-08 21:04:42.225120] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.634 [2024-10-08 21:04:42.225143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.634 [2024-10-08 21:04:42.225159] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.634 [2024-10-08 21:04:42.228742] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.634 [2024-10-08 21:04:42.238028] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.634 [2024-10-08 21:04:42.238477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.634 [2024-10-08 21:04:42.238516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.634 [2024-10-08 21:04:42.238534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.238789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.239032] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.239055] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.239072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.242647] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.251962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.252401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.252441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.252459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.252709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.252952] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.252974] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.252990] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.256561] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.635 [2024-10-08 21:04:42.265853] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.266302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.266334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.266352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.266591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.266844] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.266868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.266883] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.270452] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.279734] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.280180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.280211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.280228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.280466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.280725] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.280749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.280765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.284336] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.635 [2024-10-08 21:04:42.293613] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.294052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.294092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.294109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.294347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.294590] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.294613] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.294628] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.298209] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.307491] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.307932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.307963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.307980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.308218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.308460] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.308483] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.308499] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.312080] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.635 [2024-10-08 21:04:42.321354] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.321827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.321859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.321878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.322115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.322357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.322380] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.322395] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.325981] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.335273] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.335728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.335760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.335778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.336016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.336258] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.336281] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.336296] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.339878] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.635 [2024-10-08 21:04:42.349152] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.349632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.349671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.349690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.349928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.350170] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.350193] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.350209] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.353807] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.363084] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.363524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.363564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.363581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.363830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.364073] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.364096] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.364111] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.367711] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.635 [2024-10-08 21:04:42.376997] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.377442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.377473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.635 [2024-10-08 21:04:42.377496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.635 [2024-10-08 21:04:42.377744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.635 [2024-10-08 21:04:42.377987] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.635 [2024-10-08 21:04:42.378010] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.635 [2024-10-08 21:04:42.378025] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.635 [2024-10-08 21:04:42.381596] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.635 [2024-10-08 21:04:42.390949] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.635 [2024-10-08 21:04:42.391477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.635 [2024-10-08 21:04:42.391518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.636 [2024-10-08 21:04:42.391547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.636 [2024-10-08 21:04:42.391813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.636 [2024-10-08 21:04:42.392061] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.636 [2024-10-08 21:04:42.392085] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.636 [2024-10-08 21:04:42.392100] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.395899] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.897 [2024-10-08 21:04:42.404992] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.405454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.405487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.405506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.405756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.405999] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.406023] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.406038] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.409611] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.897 [2024-10-08 21:04:42.418893] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.419340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.419372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.419390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.419627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.419879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.419909] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.419925] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.423495] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.897 [2024-10-08 21:04:42.432784] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.433237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.433268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.433285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.433523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.433775] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.433800] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.433815] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.437107] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.897 [2024-10-08 21:04:42.446341] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.446751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.446779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.446795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.447038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.447249] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.447268] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.447281] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.450470] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.897 [2024-10-08 21:04:42.459883] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.460368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.460395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.460424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.460647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.460875] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.460896] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.460909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.464113] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.897 [2024-10-08 21:04:42.473332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.897 [2024-10-08 21:04:42.473750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.897 [2024-10-08 21:04:42.473779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.897 [2024-10-08 21:04:42.473796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.897 [2024-10-08 21:04:42.474023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.897 [2024-10-08 21:04:42.474234] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.897 [2024-10-08 21:04:42.474255] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.897 [2024-10-08 21:04:42.474268] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.897 [2024-10-08 21:04:42.477479] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.486872] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.487337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.487379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.487394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.487602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.487843] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.487864] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.487879] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.491149] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.898 [2024-10-08 21:04:42.500466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.500912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.500959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.500975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.501183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.501394] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.501414] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.501427] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.504620] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.513928] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.514349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.514375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.514405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.514618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.514860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.514882] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.514896] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.518154] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.898 [2024-10-08 21:04:42.527451] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.527892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.527936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.527952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.528175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.528387] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.528407] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.528420] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.531601] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.540987] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.541396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.541438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.541453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.541700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.541919] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.541940] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.541953] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.545199] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.898 [2024-10-08 21:04:42.554414] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.554851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.554894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.554910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.555134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.555345] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.555365] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.555384] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.558575] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.567991] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.568410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.568436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.568451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.568701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.568920] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.568941] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.568969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.572145] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.898 [2024-10-08 21:04:42.581520] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.581978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.582004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.582035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.582242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.582453] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.582473] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.582486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.585675] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.595080] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.595502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.595529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.595558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.595798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.596030] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.596050] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.596063] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.599264] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.898 [2024-10-08 21:04:42.608606] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.609050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.609092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.609107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.609329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.609540] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.609559] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.609573] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.898 [2024-10-08 21:04:42.612745] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.898 [2024-10-08 21:04:42.622131] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.898 [2024-10-08 21:04:42.622515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.898 [2024-10-08 21:04:42.622556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.898 [2024-10-08 21:04:42.622570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.898 [2024-10-08 21:04:42.622831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.898 [2024-10-08 21:04:42.623061] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.898 [2024-10-08 21:04:42.623082] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.898 [2024-10-08 21:04:42.623095] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.899 [2024-10-08 21:04:42.626285] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.899 [2024-10-08 21:04:42.635847] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.899 [2024-10-08 21:04:42.636223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.899 [2024-10-08 21:04:42.636265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.899 [2024-10-08 21:04:42.636281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.899 [2024-10-08 21:04:42.636510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.899 [2024-10-08 21:04:42.636736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.899 [2024-10-08 21:04:42.636757] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.899 [2024-10-08 21:04:42.636771] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.899 [2024-10-08 21:04:42.640023] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.899 [2024-10-08 21:04:42.649381] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.899 [2024-10-08 21:04:42.649765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.899 [2024-10-08 21:04:42.649793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:13.899 [2024-10-08 21:04:42.649809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:13.899 [2024-10-08 21:04:42.650022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:13.899 [2024-10-08 21:04:42.650246] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.899 [2024-10-08 21:04:42.650267] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.899 [2024-10-08 21:04:42.650280] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.899 [2024-10-08 21:04:42.653697] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.167 [2024-10-08 21:04:42.662987] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.663412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.663441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.167 [2024-10-08 21:04:42.663472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.167 [2024-10-08 21:04:42.663718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.167 [2024-10-08 21:04:42.663945] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.167 [2024-10-08 21:04:42.663975] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.167 [2024-10-08 21:04:42.664007] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.167 [2024-10-08 21:04:42.667304] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.167 [2024-10-08 21:04:42.676686] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.677139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.677182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.167 [2024-10-08 21:04:42.677198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.167 [2024-10-08 21:04:42.677409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.167 [2024-10-08 21:04:42.677628] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.167 [2024-10-08 21:04:42.677675] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.167 [2024-10-08 21:04:42.677690] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.167 [2024-10-08 21:04:42.680983] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.167 [2024-10-08 21:04:42.690249] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.690655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.690700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.167 [2024-10-08 21:04:42.690717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.167 [2024-10-08 21:04:42.690931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.167 [2024-10-08 21:04:42.691169] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.167 [2024-10-08 21:04:42.691191] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.167 [2024-10-08 21:04:42.691205] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.167 [2024-10-08 21:04:42.694467] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.167 [2024-10-08 21:04:42.703835] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.704200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.704242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.167 [2024-10-08 21:04:42.704258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.167 [2024-10-08 21:04:42.704480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.167 [2024-10-08 21:04:42.704720] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.167 [2024-10-08 21:04:42.704742] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.167 [2024-10-08 21:04:42.704756] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.167 [2024-10-08 21:04:42.708024] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.167 [2024-10-08 21:04:42.717449] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.717893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.717934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.167 [2024-10-08 21:04:42.717951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.167 [2024-10-08 21:04:42.718174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.167 [2024-10-08 21:04:42.718386] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.167 [2024-10-08 21:04:42.718406] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.167 [2024-10-08 21:04:42.718419] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.167 [2024-10-08 21:04:42.721613] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.167 [2024-10-08 21:04:42.730921] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.167 [2024-10-08 21:04:42.731340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.167 [2024-10-08 21:04:42.731381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.731396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.731617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.731860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.731882] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.731895] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.735103] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.744503] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.744931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.744959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.744980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.745195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.745431] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.745452] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.745466] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.748802] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.168 [2024-10-08 21:04:42.757985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.758380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.758407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.758423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.758645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.758875] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.758895] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.758909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.762179] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.771776] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.772147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.772190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.772205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.772427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.772671] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.772693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.772706] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.775969] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.168 [2024-10-08 21:04:42.785571] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.785965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.786010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.786026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.786234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.786463] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.786491] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.786505] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.789770] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.798970] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.799325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.799353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.799368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.799576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.799818] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.799840] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.799854] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.803076] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.168 [2024-10-08 21:04:42.812461] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.812836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.812879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.812895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.813134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.813345] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.813365] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.813378] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.816577] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.826019] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.826432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.826458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.826474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.826724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.826959] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.826979] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.826993] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.830186] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.168 [2024-10-08 21:04:42.839598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.840021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.840050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.840066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.840273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.840484] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.840504] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.840517] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.843725] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.853170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.853501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.853529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.853545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.853793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.854026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.854048] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.854062] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.857278] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.168 [2024-10-08 21:04:42.866719] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.867073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.168 [2024-10-08 21:04:42.867102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.168 [2024-10-08 21:04:42.867133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.168 [2024-10-08 21:04:42.867361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.168 [2024-10-08 21:04:42.867579] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.168 [2024-10-08 21:04:42.867601] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.168 [2024-10-08 21:04:42.867615] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.168 [2024-10-08 21:04:42.870871] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.168 [2024-10-08 21:04:42.880304] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.168 [2024-10-08 21:04:42.880693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.169 [2024-10-08 21:04:42.880721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.169 [2024-10-08 21:04:42.880743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.169 [2024-10-08 21:04:42.880959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.169 [2024-10-08 21:04:42.881177] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.169 [2024-10-08 21:04:42.881198] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.169 [2024-10-08 21:04:42.881212] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.169 [2024-10-08 21:04:42.884483] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.169 [2024-10-08 21:04:42.893908] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.169 [2024-10-08 21:04:42.894322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.169 [2024-10-08 21:04:42.894350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.169 [2024-10-08 21:04:42.894367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.169 [2024-10-08 21:04:42.894581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.169 [2024-10-08 21:04:42.894809] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.169 [2024-10-08 21:04:42.894830] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.169 [2024-10-08 21:04:42.894845] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.169 [2024-10-08 21:04:42.898111] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.169 [2024-10-08 21:04:42.907560] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.169 [2024-10-08 21:04:42.907903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.169 [2024-10-08 21:04:42.907931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.169 [2024-10-08 21:04:42.907947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.169 [2024-10-08 21:04:42.908161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.169 [2024-10-08 21:04:42.908379] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.169 [2024-10-08 21:04:42.908400] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.169 [2024-10-08 21:04:42.908413] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.169 [2024-10-08 21:04:42.911672] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.169 [2024-10-08 21:04:42.921269] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.169 [2024-10-08 21:04:42.921632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.169 [2024-10-08 21:04:42.921689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.169 [2024-10-08 21:04:42.921712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.169 [2024-10-08 21:04:42.921942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.169 [2024-10-08 21:04:42.922164] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.169 [2024-10-08 21:04:42.922195] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.169 [2024-10-08 21:04:42.922211] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.445 [2024-10-08 21:04:42.925696] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.445 [2024-10-08 21:04:42.934943] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.445 [2024-10-08 21:04:42.935302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.445 [2024-10-08 21:04:42.935333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.445 [2024-10-08 21:04:42.935351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.445 [2024-10-08 21:04:42.935566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.445 [2024-10-08 21:04:42.935807] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.445 [2024-10-08 21:04:42.935831] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.445 [2024-10-08 21:04:42.935845] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.445 [2024-10-08 21:04:42.939122] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.445 [2024-10-08 21:04:42.948560] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.445 [2024-10-08 21:04:42.948937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.445 [2024-10-08 21:04:42.948966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.445 [2024-10-08 21:04:42.948982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.445 [2024-10-08 21:04:42.949196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.445 [2024-10-08 21:04:42.949414] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.445 [2024-10-08 21:04:42.949437] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.445 [2024-10-08 21:04:42.949451] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.445 [2024-10-08 21:04:42.952724] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.445 [2024-10-08 21:04:42.962159] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.445 [2024-10-08 21:04:42.962664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.445 [2024-10-08 21:04:42.962701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.445 [2024-10-08 21:04:42.962717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.445 [2024-10-08 21:04:42.962931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.445 [2024-10-08 21:04:42.963149] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.445 [2024-10-08 21:04:42.963170] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.445 [2024-10-08 21:04:42.963184] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.445 [2024-10-08 21:04:42.966442] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.445 [2024-10-08 21:04:42.975748] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.445 [2024-10-08 21:04:42.976139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.445 [2024-10-08 21:04:42.976167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.445 [2024-10-08 21:04:42.976188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:42.976401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:42.976619] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:42.976640] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:42.976662] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:42.979877] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.446 21:04:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.446 21:04:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:14.446 21:04:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:14.446 21:04:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.446 21:04:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.446 [2024-10-08 21:04:42.989252] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:42.989646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:42.989681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:42.989697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:42.989911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:42.990130] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:42.990150] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:42.990164] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:42.993447] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.446 [2024-10-08 21:04:43.002869] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.003270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.003299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.003315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.003529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.003758] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.003780] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.003794] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:43.007041] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
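The traced (( i == 0 )) / return 0 / timing_exit start_nvmf_tgt lines above appear to be the harness closing out its start_nvmf_tgt phase, i.e. the freshly started target is now answering RPCs. A minimal standalone wait with the same effect (a hypothetical sketch, not the suite's own helper) could simply poll a cheap RPC until it succeeds:

  # assumes the default /var/tmp/spdk.sock RPC socket; illustrative only
  until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
          sleep 1
  done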
00:37:14.446 [2024-10-08 21:04:43.016386] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.016764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.016792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.016808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.017022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.017240] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.017261] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.017275] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.446 [2024-10-08 21:04:43.020511] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.446 [2024-10-08 21:04:43.024949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.446 [2024-10-08 21:04:43.029961] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.030381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.030406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.030421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.030661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.030880] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.030900] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.030914] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:43.034068] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
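Interleaved with the reconnect errors, the trace shows the target side being configured: host/bdevperf.sh@17 issues rpc_cmd nvmf_create_transport -t tcp -o -u 8192, and the target acknowledges it with the *** TCP Transport Init *** notice. Outside the harness the same call can be made directly with SPDK's rpc.py (a sketch; rpc_cmd is assumed here to be the suite's thin wrapper around rpc.py):

  # create the TCP transport; flags copied verbatim from the traced rpc_cmd call
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192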
00:37:14.446 [2024-10-08 21:04:43.043362] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.043796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.043825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.043841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.044055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.044273] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.044293] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.044307] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:43.047544] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.446 [2024-10-08 21:04:43.056986] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.057459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.057504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.057521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.057748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.057969] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.057990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.058005] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 [2024-10-08 21:04:43.061276] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
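
Every failed reset cycle above starts from posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED): at this point in the trace the subsystem listener on 10.0.0.2:4420 has not been added yet (it comes up a few records later), so each reconnect attempt dies before the admin qpair can be rebuilt and bdev_nvme reports "Resetting controller failed." A minimal, purely illustrative shell check that reproduces the same errno against the address seen in the trace:

    # Illustrative only: probe the listener address from the trace above.
    # With no nvmf_tgt listener on 10.0.0.2:4420 the connect is refused (errno 111, ECONNREFUSED).
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect() to 10.0.0.2:4420 failed - listener not up yet'
    fi
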
00:37:14.446 [2024-10-08 21:04:43.070507] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.071073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.071126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.071147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.071370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.071591] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.071613] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.071630] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.446 Malloc0 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.446 [2024-10-08 21:04:43.074903] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.446 [2024-10-08 21:04:43.084071] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.446 [2024-10-08 21:04:43.084533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.446 [2024-10-08 21:04:43.084574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581100 with addr=10.0.0.2, port=4420 00:37:14.446 [2024-10-08 21:04:43.084608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581100 is same with the state(6) to be set 00:37:14.446 [2024-10-08 21:04:43.084832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581100 (9): Bad file descriptor 00:37:14.446 [2024-10-08 21:04:43.085051] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.446 [2024-10-08 21:04:43.085071] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.446 [2024-10-08 21:04:43.085085] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:14.446 [2024-10-08 21:04:43.088350] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:14.446 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.447 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.447 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.447 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.447 [2024-10-08 21:04:43.094300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.447 [2024-10-08 21:04:43.097584] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.447 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.447 21:04:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1862134 00:37:14.447 3111.29 IOPS, 12.15 MiB/s [2024-10-08T19:04:43.210Z] [2024-10-08 21:04:43.172128] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:16.754 3759.62 IOPS, 14.69 MiB/s [2024-10-08T19:04:46.450Z] 4304.78 IOPS, 16.82 MiB/s [2024-10-08T19:04:47.384Z] 4736.20 IOPS, 18.50 MiB/s [2024-10-08T19:04:48.318Z] 5096.82 IOPS, 19.91 MiB/s [2024-10-08T19:04:49.255Z] 5389.25 IOPS, 21.05 MiB/s [2024-10-08T19:04:50.188Z] 5641.15 IOPS, 22.04 MiB/s [2024-10-08T19:04:51.562Z] 5854.57 IOPS, 22.87 MiB/s [2024-10-08T19:04:51.562Z] 6051.20 IOPS, 23.64 MiB/s 00:37:22.799 Latency(us) 00:37:22.799 [2024-10-08T19:04:51.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.799 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:22.799 Verification LBA range: start 0x0 length 0x4000 00:37:22.799 Nvme1n1 : 15.01 6052.28 23.64 6668.73 0.00 10029.31 831.34 27379.48 00:37:22.799 [2024-10-08T19:04:51.562Z] =================================================================================================================== 00:37:22.799 [2024-10-08T19:04:51.562Z] Total : 6052.28 23.64 6668.73 0.00 10029.31 831.34 27379.48 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:22.799 21:04:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:22.799 rmmod nvme_tcp 00:37:22.799 rmmod nvme_fabrics 00:37:22.799 rmmod nvme_keyring 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1862866 ']' 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1862866 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1862866 ']' 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1862866 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:22.799 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1862866 00:37:23.059 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:23.059 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:23.059 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1862866' 00:37:23.059 killing process with pid 1862866 00:37:23.059 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1862866 00:37:23.059 21:04:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1862866 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.318 21:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:25.858 00:37:25.858 real 0m24.111s 00:37:25.858 user 1m1.644s 00:37:25.858 sys 0m5.492s 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.858 
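
For reference, the target side of the bdevperf run above is configured entirely through the rpc_cmd calls visible in the trace (host/bdevperf.sh@17-21): create the TCP transport, create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and add the 10.0.0.2:4420 TCP listener. A rough standalone equivalent using scripts/rpc.py, with the rpc.py location and default RPC socket assumed and all other values taken from the trace:

    # Sketch of the RPC sequence host/bdevperf.sh drives via rpc_cmd (values from the trace above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
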
************************************ 00:37:25.858 END TEST nvmf_bdevperf 00:37:25.858 ************************************ 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.858 ************************************ 00:37:25.858 START TEST nvmf_target_disconnect 00:37:25.858 ************************************ 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:25.858 * Looking for test storage... 00:37:25.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.858 --rc genhtml_branch_coverage=1 00:37:25.858 --rc genhtml_function_coverage=1 00:37:25.858 --rc genhtml_legend=1 00:37:25.858 --rc geninfo_all_blocks=1 00:37:25.858 --rc geninfo_unexecuted_blocks=1 00:37:25.858 00:37:25.858 ' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.858 --rc genhtml_branch_coverage=1 00:37:25.858 --rc genhtml_function_coverage=1 00:37:25.858 --rc genhtml_legend=1 00:37:25.858 --rc geninfo_all_blocks=1 00:37:25.858 --rc geninfo_unexecuted_blocks=1 00:37:25.858 00:37:25.858 ' 00:37:25.858 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.859 --rc genhtml_branch_coverage=1 00:37:25.859 --rc genhtml_function_coverage=1 00:37:25.859 --rc genhtml_legend=1 00:37:25.859 --rc geninfo_all_blocks=1 00:37:25.859 --rc geninfo_unexecuted_blocks=1 00:37:25.859 00:37:25.859 ' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:25.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.859 --rc genhtml_branch_coverage=1 00:37:25.859 --rc genhtml_function_coverage=1 00:37:25.859 --rc genhtml_legend=1 00:37:25.859 --rc geninfo_all_blocks=1 00:37:25.859 --rc geninfo_unexecuted_blocks=1 00:37:25.859 00:37:25.859 ' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:25.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:25.859 21:04:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.153 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.153 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.153 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.153 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:29.154 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:29.154 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:29.154 Found net devices under 0000:84:00.0: cvl_0_0 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:29.154 Found net devices under 0000:84:00.1: cvl_0_1 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
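
The nvmf_tcp_init trace that continues below turns the two e810 ports found above into a point-to-point test topology: cvl_0_0 becomes the target interface inside the cvl_0_0_ns_spdk network namespace with 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1, and TCP port 4420 is explicitly allowed in. Condensed into plain commands (names and addresses taken from this trace, address flushes omitted):

    # Condensed sketch of the nvmf_tcp_init steps traced below.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
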
00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.154 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:37:29.155 00:37:29.155 --- 10.0.0.2 ping statistics --- 00:37:29.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.155 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:37:29.155 00:37:29.155 --- 10.0.0.1 ping statistics --- 00:37:29.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.155 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.155 ************************************ 00:37:29.155 START TEST nvmf_target_disconnect_tc1 00:37:29.155 ************************************ 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:29.155 21:04:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.155 [2024-10-08 21:04:57.752971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.155 [2024-10-08 21:04:57.753173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06620 with addr=10.0.0.2, port=4420 00:37:29.155 [2024-10-08 21:04:57.753295] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:29.155 [2024-10-08 21:04:57.753370] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:29.155 [2024-10-08 21:04:57.753406] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:29.155 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:29.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:29.155 Initializing NVMe Controllers 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.155 00:37:29.155 real 0m0.212s 00:37:29.155 user 0m0.099s 00:37:29.155 sys 0m0.111s 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:29.155 ************************************ 00:37:29.155 END TEST nvmf_target_disconnect_tc1 00:37:29.155 ************************************ 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.155 ************************************ 00:37:29.155 START TEST nvmf_target_disconnect_tc2 00:37:29.155 ************************************ 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1866100 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1866100 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1866100 ']' 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.155 21:04:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.416 [2024-10-08 21:04:57.919993] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:37:29.416 [2024-10-08 21:04:57.920095] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.416 [2024-10-08 21:04:58.037871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:29.675 [2024-10-08 21:04:58.257083] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.675 [2024-10-08 21:04:58.257187] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
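
The -m 0xF0 mask handed to nvmf_tgt above (echoed in the DPDK EAL parameters as -c 0xF0) selects CPU bits 4 through 7, which is why the reactor threads in the following records come up on cores 4, 5, 6 and 7. A quick way to expand such a mask:

    # Expand an SPDK/DPDK core mask into the cores it selects (0xF0 -> cores 4 5 6 7).
    mask=0xF0
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "core $core selected"
        fi
    done
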
00:37:29.675 [2024-10-08 21:04:58.257227] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.675 [2024-10-08 21:04:58.257258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.675 [2024-10-08 21:04:58.257285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.675 [2024-10-08 21:04:58.260803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:37:29.675 [2024-10-08 21:04:58.260875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:37:29.675 [2024-10-08 21:04:58.260908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:37:29.675 [2024-10-08 21:04:58.260914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:37:29.675 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.675 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:29.675 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:29.676 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:29.676 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.934 Malloc0 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.934 [2024-10-08 21:04:58.478758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.934 21:04:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.934 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.935 [2024-10-08 21:04:58.507446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1866248 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.935 21:04:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:31.843 21:05:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1866100 00:37:31.843 21:05:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error 
(sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Write completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 [2024-10-08 21:05:00.533701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.843 starting I/O failed 00:37:31.843 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 
00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Read completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 Write completed with error (sct=0, sc=8) 00:37:31.844 starting I/O failed 00:37:31.844 [2024-10-08 21:05:00.534045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:31.844 [2024-10-08 21:05:00.534311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.534383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.534588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.534674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.534846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.534872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.534999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.535071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.535292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.535357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 
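[editor's note] The console output above captures the whole tc2 sequence: the target exposes Malloc0 through nqn.2016-06.io.spdk:cnode1, opens data and discovery listeners on 10.0.0.2 port 4420, the reconnect example starts a 32-deep 4 KiB randrw workload against that address, and two seconds later the target process (pid 1866100 in this run) is hard-killed, so the in-flight commands complete with errors and the qpairs report CQ transport error -6. A minimal shell sketch of that flow is below; it assumes SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper and uses a placeholder $target_pid, so it is an illustration of the steps seen in the log, not the test script itself.

  rpc=./scripts/rpc.py

  # Expose Malloc0 through the subsystem and open the data + discovery TCP listeners
  # (the harness passes the literal word "discovery" here; depending on the SPDK
  # version the full discovery NQN may be required instead).
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Start the reconnect example: queue depth 32, 4096-byte I/O, 50/50 randrw,
  # 10 seconds, core mask 0xF, aimed at the TCP listener above (values copied
  # from the command line shown in the log).
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnect_pid=$!

  sleep 2
  kill -9 "$target_pid"   # hard-kill the nvmf target; outstanding I/O completes with errors
  sleep 2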
00:37:31.844 [2024-10-08 21:05:00.535535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.535594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.535833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.535859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.536048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.536119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.536370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.536434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.536642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.536719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.536828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.536854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.536940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.536965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.537198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.537262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.537437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.537502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.537713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.537763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 
00:37:31.844 [2024-10-08 21:05:00.537913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.537955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.538085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.538123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.538272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.538310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.538460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.538512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.538663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.538690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.538813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.538849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.539007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.539071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.539300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.539365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.539558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.539581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.539788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.539814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 
00:37:31.844 [2024-10-08 21:05:00.539918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.539944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.540067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.540123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.540349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.540412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.540621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.540709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.540816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.540842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.540980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.844 [2024-10-08 21:05:00.541004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.844 qpair failed and we were unable to recover it. 00:37:31.844 [2024-10-08 21:05:00.541154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.541317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.541434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.541577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 
00:37:31.845 [2024-10-08 21:05:00.541721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.541853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.541879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.541989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.542052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.542278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.542342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.542519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.542583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.542807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.542839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.542982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.543153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.543333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.543496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 
00:37:31.845 [2024-10-08 21:05:00.543644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.543780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.543935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.543960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.544860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.544885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.545008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 
00:37:31.845 [2024-10-08 21:05:00.545139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.545313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.545519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.545784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.545934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.545983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.546149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.546329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.546514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.546668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.546799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 
00:37:31.845 [2024-10-08 21:05:00.546925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.546966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.845 [2024-10-08 21:05:00.547915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.845 [2024-10-08 21:05:00.547940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.845 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.548093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.548118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.548315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.548369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.548503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.548528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 
00:37:31.846 [2024-10-08 21:05:00.548672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.548699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.548807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.548856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.549841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.549867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.550009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.550219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 
00:37:31.846 [2024-10-08 21:05:00.550395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.550594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.550761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.550948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.550992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.551953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.551978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 
00:37:31.846 [2024-10-08 21:05:00.552099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.552271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.552442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.552609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.552756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.552886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.552913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.553058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.553233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.553403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.553560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 
00:37:31.846 [2024-10-08 21:05:00.553710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.553835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.553861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.554016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.554041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.554170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.554208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.554338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.554363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.554499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.554523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.846 [2024-10-08 21:05:00.554669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.846 [2024-10-08 21:05:00.554695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.846 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.554793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.554819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.554964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.554990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.555134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.555172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 
00:37:31.847 [2024-10-08 21:05:00.555331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.555355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.555501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.555525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.555672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.555699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.555832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.555879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.555993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.556172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.556323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.556483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.556664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.556820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.556846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 
00:37:31.847 [2024-10-08 21:05:00.556953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.557953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.557979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.558131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.558337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.558477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 
00:37:31.847 [2024-10-08 21:05:00.558637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.558778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.558906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.558932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.559825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.559850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.560009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.560048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 
00:37:31.847 [2024-10-08 21:05:00.560191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.560216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.560329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.560352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.560472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.560496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.847 [2024-10-08 21:05:00.560688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.847 [2024-10-08 21:05:00.560751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.847 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.560875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.560911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.561064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.561098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.561208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.561268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.561470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.561531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.561735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.561761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.561972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.562040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 
00:37:31.848 [2024-10-08 21:05:00.562275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.562336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.562541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.562605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.562803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.562830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.562996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.563214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.563389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.563567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.563715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.563868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.563901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.564083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 
00:37:31.848 [2024-10-08 21:05:00.564257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.564369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.564548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.564733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.564890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.564936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.565104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.565315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.565463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.565623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.565783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 
00:37:31.848 [2024-10-08 21:05:00.565940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.565971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.566910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.566936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.567067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.567091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.567226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.848 [2024-10-08 21:05:00.567252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.848 qpair failed and we were unable to recover it. 00:37:31.848 [2024-10-08 21:05:00.567350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.567376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 
00:37:31.849 [2024-10-08 21:05:00.567477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.567502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.567643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.567689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.567813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.567838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.567970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.568005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.568160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.568224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.568435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.568499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.568677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.568709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.568811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.568837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.568977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.569175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 
00:37:31.849 [2024-10-08 21:05:00.569381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.569584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.569759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.569906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.569963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.570137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.570202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.570435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.570498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.570696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.570745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.570841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.570867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.571025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.571088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.571270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.571333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 
00:37:31.849 [2024-10-08 21:05:00.571571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.571636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.571814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.571841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.572920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.572945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.573080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.573227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 
00:37:31.849 [2024-10-08 21:05:00.573401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.573567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.573713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.573869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.573916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 00:37:31.849 [2024-10-08 21:05:00.574759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.849 [2024-10-08 21:05:00.574786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.849 qpair failed and we were unable to recover it. 
00:37:31.850 [2024-10-08 21:05:00.574962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.574986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.575853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.575976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 
00:37:31.850 [2024-10-08 21:05:00.576418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.576903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.576992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.577169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.577330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.577589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.577765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.577939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.577974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 
00:37:31.850 [2024-10-08 21:05:00.578126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.578186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.578387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.578448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.578701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.578750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.578857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.578883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.580147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.580224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.580443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.580509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.580704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.580731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.580837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.580863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.580987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.581050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.581271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.581336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 
00:37:31.850 [2024-10-08 21:05:00.581568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.581633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.581829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.581854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.581953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.581979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.582156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.582182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.582378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.582439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.582658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.582684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.582819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.582845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.583057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.583120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.583325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.583390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 00:37:31.850 [2024-10-08 21:05:00.584747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.584778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.850 qpair failed and we were unable to recover it. 
00:37:31.850 [2024-10-08 21:05:00.584886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.850 [2024-10-08 21:05:00.584913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.585093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.585278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.585511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.585736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.585885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.585999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.586061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.586297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.586361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.586566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.586628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.586828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.586859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 
00:37:31.851 [2024-10-08 21:05:00.586992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.587016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.587166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.587231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.587470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.587536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.587704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.587730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.587862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.587889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.588051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.588117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.588329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.588393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.588594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.588676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.588837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.588863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.588953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.588978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 
00:37:31.851 [2024-10-08 21:05:00.589124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.589150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.589397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.589462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.589675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.589702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.589831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.589857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.590032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.590095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.590319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.590384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.590564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.590628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.590818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.590844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.590969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.590995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.591169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.591233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 
00:37:31.851 [2024-10-08 21:05:00.591465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.591529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.591738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.591765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.591887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.591913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.592090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.592153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.592328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.592406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.592636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.592717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.592822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.592850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.592997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.593023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.593143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.593192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.593423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.593487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 
00:37:31.851 [2024-10-08 21:05:00.593754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.851 [2024-10-08 21:05:00.593781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.851 qpair failed and we were unable to recover it. 00:37:31.851 [2024-10-08 21:05:00.593900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.593956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.594163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.594227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.594462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.594529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.594742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.594766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.594896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.594922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.595136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.595162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.595284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.595309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.595521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.595617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.595802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.595830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 
00:37:31.852 [2024-10-08 21:05:00.595985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.596011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.596134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.596161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.596295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.596357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.596590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.596689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.596923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.596971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.597197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.597222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.597395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.597459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.597685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.597728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.597840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.597868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.597997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.598032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 
00:37:31.852 [2024-10-08 21:05:00.598249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.598276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 [2024-10-08 21:05:00.598423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.852 [2024-10-08 21:05:00.598499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:31.852 qpair failed and we were unable to recover it. 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Write completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 [2024-10-08 21:05:00.599095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 2 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.852 starting I/O failed 00:37:31.852 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Read completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 Write completed with error (sct=0, sc=8) 00:37:31.853 starting I/O failed 00:37:31.853 [2024-10-08 21:05:00.599800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:31.853 [2024-10-08 21:05:00.599894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.853 [2024-10-08 21:05:00.599933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.853 qpair failed and we were unable to recover it. 
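Annotation (not part of the captured log): the block of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries is the other half of the same failure. Once the fabric connection drops, every outstanding command on the queue pair is completed with status code type 0 (generic) and status code 0x08, which in the NVMe base specification is "Command Aborted due to SQ Deletion", and spdk_nvme_qpair_process_completions() then returns the transport error (-6, ENXIO "No such device or address") seen here for qpair ids 2 and 1. As a hedged sketch only — spdk_nvme_qpair_process_completions() is the real SPDK call, but the helper name and surrounding structure are illustrative assumptions about a typical polling loop, not the test's code:

/* Hedged sketch of a completion-polling loop; error handling simplified. */
#include "spdk/nvme.h"
#include <stdio.h>

/* poll_qpair() is a hypothetical helper name for illustration. */
static int poll_qpair(struct spdk_nvme_qpair *qpair)
{
    /* Returns the number of completions processed, or a negated errno
     * (e.g. -ENXIO == -6, "No such device or address") when the
     * transport connection behind the qpair has failed. Passing 0 as
     * max_completions asks SPDK to drain whatever is available. */
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
    if (rc < 0) {
        fprintf(stderr, "CQ transport error %d on qpair\n", rc);
        /* By this point the outstanding commands have already been
         * completed back to their callbacks with sct=0, sc=0x08
         * (Command Aborted due to SQ Deletion), matching the log above. */
        return rc;
    }
    return 0;
}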
00:37:31.853 [2024-10-08 21:05:00.600099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.853 [2024-10-08 21:05:00.600155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.853 qpair failed and we were unable to recover it. 00:37:31.853 [2024-10-08 21:05:00.600306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.853 [2024-10-08 21:05:00.600349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:31.853 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.600461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.600525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.600614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.600639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.600753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.600779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.600877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.600902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 
00:37:32.127 [2024-10-08 21:05:00.601545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.127 [2024-10-08 21:05:00.601822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.127 [2024-10-08 21:05:00.601849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.127 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.601976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.602796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.602953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 
00:37:32.128 [2024-10-08 21:05:00.603173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.603328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.603513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.603647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.603818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.603845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.603996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.604176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.604368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.604540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.604707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 
00:37:32.128 [2024-10-08 21:05:00.604874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.604927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.605086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.605145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.605285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.605342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.605466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.605491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.605622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.605669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.606697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.606729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.606830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.606857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.606964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.606990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.607717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.607748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.607863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.607890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 
00:37:32.128 [2024-10-08 21:05:00.608029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.608229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.608388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.608541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.608681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.608838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.608878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 
00:37:32.128 [2024-10-08 21:05:00.609622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.609923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.128 [2024-10-08 21:05:00.609949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.128 qpair failed and we were unable to recover it. 00:37:32.128 [2024-10-08 21:05:00.610086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.610110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.610271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.610296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.610458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.610483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.610659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.610688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.610824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.610864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.610994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.611164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 
00:37:32.129 [2024-10-08 21:05:00.611358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.611489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.611601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.611746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.611918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.611985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.612139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.612190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.612326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.612384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.612551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.612576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.612703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.612730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.612855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.612882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 
00:37:32.129 [2024-10-08 21:05:00.613016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.613939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.613980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.614109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.614263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.614404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 
00:37:32.129 [2024-10-08 21:05:00.614565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.614724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.614902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.614956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.615948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.615973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.616097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.616137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 
00:37:32.129 [2024-10-08 21:05:00.616259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.616314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.616454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.616483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.129 qpair failed and we were unable to recover it. 00:37:32.129 [2024-10-08 21:05:00.616626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.129 [2024-10-08 21:05:00.616686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.616785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.616812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.616915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.616974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.617189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.617263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.617471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.617538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.617738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.617765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.617863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.617888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.618021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.618046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 
00:37:32.130 [2024-10-08 21:05:00.618211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.618237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.618376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.618439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.618669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.618721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.618926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.618995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.619240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.619305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.619544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.619610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.619776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.619803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.619989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.620189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.620400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 
00:37:32.130 [2024-10-08 21:05:00.620545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.620678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.620840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.620866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.620973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.621925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.621989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 
00:37:32.130 [2024-10-08 21:05:00.622224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.622289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.622534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.622600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.622806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.622832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.622952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.622993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.623174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.623238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.623491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.623555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.623776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.623803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.623903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.623929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.624138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.624203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.624436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.624501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 
00:37:32.130 [2024-10-08 21:05:00.624723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.624749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.624857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.624883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.130 qpair failed and we were unable to recover it. 00:37:32.130 [2024-10-08 21:05:00.624995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.130 [2024-10-08 21:05:00.625035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.625212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.625276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.625490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.625554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.625741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.625768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.625863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.625889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.626001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.626026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.626144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.626169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.626372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.626435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 
00:37:32.131 [2024-10-08 21:05:00.626660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.626686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.626811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.626837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.626975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.627039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.627209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.627280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.627490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.627554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.627745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.627771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.627875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.627901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.628036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.628079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.628265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.628330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.628583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.628647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 
00:37:32.131 [2024-10-08 21:05:00.628794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.628819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.628965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.628990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.629197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.629261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.629492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.629556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.629751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.629777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.629868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.629893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.630038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.630062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.630215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.630279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.630500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.630564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.630756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.630782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 
00:37:32.131 [2024-10-08 21:05:00.630885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.630911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.631041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.631081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.631225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.631289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.631518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.631583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.631768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.631794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.631884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.631910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.632060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.632125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.633741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.633772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.633877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.633904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.634061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.634125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 
00:37:32.131 [2024-10-08 21:05:00.634416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.131 [2024-10-08 21:05:00.634481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.131 qpair failed and we were unable to recover it. 00:37:32.131 [2024-10-08 21:05:00.634702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.634745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.634851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.634876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.634973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.634997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.635135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.635159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.635404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.635469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.635667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.635692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.635809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.635834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.635952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.636016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.636257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.636322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 
00:37:32.132 [2024-10-08 21:05:00.636497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.636563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.636753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.636779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.636877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.636903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.637028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.637067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.637253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.637317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.637569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.637633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.637799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.637824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.637974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.638049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.638346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.638411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.638588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.638670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 
00:37:32.132 [2024-10-08 21:05:00.638796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.638822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.638956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.638980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.639171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.639235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.639464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.639529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.639729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.639754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.639858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.639882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.640036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.640100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.640358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.640422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.640626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.640707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.640828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.640855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 
00:37:32.132 [2024-10-08 21:05:00.640974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.641013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.641141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.641216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.641448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.641512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.641694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.641733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.641820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.132 [2024-10-08 21:05:00.641843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.132 qpair failed and we were unable to recover it. 00:37:32.132 [2024-10-08 21:05:00.642005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.642069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.642262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.642285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.642402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.642426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.642611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.642692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.642877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.642902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 
00:37:32.133 [2024-10-08 21:05:00.643024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.643048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.643208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.643272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.643479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.643503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.643657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.643682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.643880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.643946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.644138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.644161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.644398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.644462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.644701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.644767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.645052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.645076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.645230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.645294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 
00:37:32.133 [2024-10-08 21:05:00.645473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.645537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.645725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.645750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.645846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.645870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.646071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.646314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.646438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.646637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.646871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.646990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.647014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.647130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.647195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 
00:37:32.133 [2024-10-08 21:05:00.647410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.647434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.647624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.647711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.647823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.647847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.647995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.648019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.648187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.648251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.648450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.648515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.648719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.648744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.648969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.649033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.649234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.649298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.649534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.649557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 
00:37:32.133 [2024-10-08 21:05:00.649725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.649749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.649956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.133 [2024-10-08 21:05:00.650022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.133 qpair failed and we were unable to recover it. 00:37:32.133 [2024-10-08 21:05:00.650245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.650269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.650422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.650497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.650781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.650846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.651092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.651116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.651277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.651341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.651548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.651612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.651821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.651846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.651972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.651996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 
00:37:32.134 [2024-10-08 21:05:00.652227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.652290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.652481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.652506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.652656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.652683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.652817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.652881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.653171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.653194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.653389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.653453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.653690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.653755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.653959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.653996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.654174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.654238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.654470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.654534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 
00:37:32.134 [2024-10-08 21:05:00.654727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.654765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.654863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.654888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.655013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.655078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.655317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.655340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.655441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.655465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.655646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.655728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.655938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.655962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.656082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.656109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.656345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.656410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.656682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.656734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 
00:37:32.134 [2024-10-08 21:05:00.656865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.656890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.657039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.657104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.657321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.657344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.657477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.657501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.657641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.657722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.657903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.657927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.658043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.658067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.658250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.134 [2024-10-08 21:05:00.658314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.134 qpair failed and we were unable to recover it. 00:37:32.134 [2024-10-08 21:05:00.658545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.658609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.658792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.658817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 
00:37:32.135 [2024-10-08 21:05:00.658916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.658968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.659227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.659250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.659399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.659463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.659677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.659743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.659923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.659962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.660108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.660131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.660302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.660365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.660577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.660600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.660742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.660793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.661039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.661103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 
00:37:32.135 [2024-10-08 21:05:00.661338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.661361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.661494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.661573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.661800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.661867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.662082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.662105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.662287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.662351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.662633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.662715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.662920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.662944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.663089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.663113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.663357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.663421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.663637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.663681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 
00:37:32.135 [2024-10-08 21:05:00.663781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.663805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.663994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.664058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.664297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.664321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.664421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.664445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.664685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.664750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.664978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.665018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.665135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.665173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.665308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.665383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.665635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.665720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.665817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.665841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 
00:37:32.135 [2024-10-08 21:05:00.666026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.666090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.666295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.666318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.666476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.666544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.666755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.666781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.666893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.666918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.667020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.667044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.667180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.667238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.135 [2024-10-08 21:05:00.667433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.135 [2024-10-08 21:05:00.667472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.135 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.667613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.667682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.667913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.667977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 
00:37:32.136 [2024-10-08 21:05:00.668203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.668226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.668361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.668416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.668616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.668697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.668940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.668965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.669114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.669179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.669419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.669484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.669690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.669715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.669862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.669909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.670141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.670205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.670419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.670442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 
00:37:32.136 [2024-10-08 21:05:00.670549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.670573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.670803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.670869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.671092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.671115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.671299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.671362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.671600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.671678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.671915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.671939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.672108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.672170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.672399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.672463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.672707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.672732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.672844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.672868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 
00:37:32.136 [2024-10-08 21:05:00.673015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.673079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.673327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.673351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.673529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.673592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.673817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.673843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.673972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.673996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.674223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.674286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.674524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.674588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.674807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.674836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.674985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.675028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 00:37:32.136 [2024-10-08 21:05:00.675266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.136 [2024-10-08 21:05:00.675330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.136 qpair failed and we were unable to recover it. 
00:37:32.136 [2024-10-08 21:05:00.675549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.675573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.675680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.675704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.675885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.675949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.676166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.676189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.676333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.676384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.676616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.676694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.676901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.676926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.677055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.677079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.677279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.677343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.677583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.677606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 
00:37:32.137 [2024-10-08 21:05:00.677771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.677836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.678089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.678154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.678376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.678399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.678534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.678587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.678860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.678926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.679123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.679146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.679287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.679311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.679539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.679603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.679832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.679856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.680002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.680064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 
00:37:32.137 [2024-10-08 21:05:00.680290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.680354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.680585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.680649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.680796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.680820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.680978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.681041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.681255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.681278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.681408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.681432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.681596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.681674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.681808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.681832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.681991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.682015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.682125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.682148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 
00:37:32.137 [2024-10-08 21:05:00.682364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.682427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.682633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.682716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.682964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.683001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.683175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.683238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.683470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.683534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.683763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.683828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.684071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.684094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.684274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.137 [2024-10-08 21:05:00.684348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.137 qpair failed and we were unable to recover it. 00:37:32.137 [2024-10-08 21:05:00.684558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.684621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.684920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.684984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 
00:37:32.138 [2024-10-08 21:05:00.685186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.685209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.685310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.685334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.685481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.685545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.685811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.685876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.686109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.686132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.686301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.686324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.686601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.686679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.686892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.686955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.687170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.687193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.687299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.687323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 
00:37:32.138 [2024-10-08 21:05:00.687537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.687600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.687862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.687926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.688141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.688164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.688324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.688403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.688633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.688714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.688914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.688978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.689223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.689246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.689425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.689487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.689722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.689788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.690025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.690089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 
00:37:32.138 [2024-10-08 21:05:00.690338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.690361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.690517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.690581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.690812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.690837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.691029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.691094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.691335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.691359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.691463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.691487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.691719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.691785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.691988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.692051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.692287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.692310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.692491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.692555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 
00:37:32.138 [2024-10-08 21:05:00.692800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.692865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.693072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.693135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.693379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.693402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.693552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.693575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.693801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.693867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.694108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.694171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.138 qpair failed and we were unable to recover it. 00:37:32.138 [2024-10-08 21:05:00.694402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.138 [2024-10-08 21:05:00.694425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.694598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.694625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.694918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.695017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.695254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.695320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 
00:37:32.139 [2024-10-08 21:05:00.695535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.695559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.695677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.695704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.695921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.695988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.696189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.696253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.696479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.696502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.696657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.696683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.696923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.696987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.697217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.697281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.697476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.697499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.697667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.697720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 
00:37:32.139 [2024-10-08 21:05:00.697947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.698012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.698256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.698321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.698517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.698541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.698678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.698703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.698888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.698952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.699150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.699213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.699493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.699556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.699766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d35f0 is same with the state(6) to be set 00:37:32.139 [2024-10-08 21:05:00.699900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.699929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.700176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.700242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 
00:37:32.139 [2024-10-08 21:05:00.700483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.700547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.700783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.700809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.700982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.701046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.701273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.701297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.701418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.701468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.701724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.701790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.702028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.702051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.702202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.702225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.702427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.702491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.702697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.702722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 
00:37:32.139 [2024-10-08 21:05:00.702869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.702921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.703131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.703195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.703445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.703468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.703623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.703704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.703905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.703970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.704196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.704219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.704397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.139 [2024-10-08 21:05:00.704460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.139 qpair failed and we were unable to recover it. 00:37:32.139 [2024-10-08 21:05:00.704669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.704725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.704858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.704883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.705042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.705091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 
00:37:32.140 [2024-10-08 21:05:00.705323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.705388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.705614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.705663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.705836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.705900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.706097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.706161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.706351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.706375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.706507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.706531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.706693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.706758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.706996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.707019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.707177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.707241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.707467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.707530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 
00:37:32.140 [2024-10-08 21:05:00.707743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.707769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.707985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.708059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.708291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.708354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.708537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.708560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.708709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.708750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.708954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.709018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.709215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.709239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.709409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.709447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.709644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.709719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.709959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.709984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 
00:37:32.140 [2024-10-08 21:05:00.710112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.710158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.710363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.710426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.710668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.710726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.710851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.710876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.711030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.711093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.711342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.711366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.711497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.711566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.711793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.711818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.711975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.712013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.712178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.712241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 
00:37:32.140 [2024-10-08 21:05:00.712474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.712537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.712768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.712794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.712889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.712914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.713070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.713133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.713357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.713380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.140 [2024-10-08 21:05:00.713573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.140 [2024-10-08 21:05:00.713636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.140 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.713873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.713937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.714161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.714184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.714369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.714434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.714614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.714697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 
00:37:32.141 [2024-10-08 21:05:00.714923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.714961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.715116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.715180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.715384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.715447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.715691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.715716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.715833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.715890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.716116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.716180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.716394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.716418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.716550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.716574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.716813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.716839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.716979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.717004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 
00:37:32.141 [2024-10-08 21:05:00.717200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.717263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.717496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.717569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.717811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.717836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.717928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.717968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.718120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.718184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.718411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.718434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.718542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.718566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.718803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.718828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.718957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.718982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.719165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.719229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 
00:37:32.141 [2024-10-08 21:05:00.719462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.719525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.719756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.719782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.719955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.720019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.720230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.720293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.720551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.720574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.720711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.720760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.720999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.721063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.721311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.141 [2024-10-08 21:05:00.721334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.141 qpair failed and we were unable to recover it. 00:37:32.141 [2024-10-08 21:05:00.721482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.721545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.721798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.721824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 
00:37:32.142 [2024-10-08 21:05:00.721922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.721946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.722089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.722112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.722245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.722309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.722560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.722583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.722746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.722769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.722992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.723056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.723293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.723316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.723445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.723511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.723758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.723824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.724042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.724065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 
00:37:32.142 [2024-10-08 21:05:00.724190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.724214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.724431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.724494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.724730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.724755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.724927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.724991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.725228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.725292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.725463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.725486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.725624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.725648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.725816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.725879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.726118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.726141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.726249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.726287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 
00:37:32.142 [2024-10-08 21:05:00.726444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.726508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.726733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.726765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.726891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.726916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.727146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.727209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.727408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.727431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.727567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.727590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.727786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.727811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.727964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.727988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.728172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.728233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.728432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.728495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 
00:37:32.142 [2024-10-08 21:05:00.728735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.728760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.728880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.728940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.729113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.729176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.729435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.729458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.729669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.729734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.729981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.730046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.142 qpair failed and we were unable to recover it. 00:37:32.142 [2024-10-08 21:05:00.730266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.142 [2024-10-08 21:05:00.730289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.730429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.730502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.730768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.730833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.731108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.731131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 
00:37:32.143 [2024-10-08 21:05:00.731390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.731453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.731705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.731770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.732009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.732033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.732169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.732240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.732505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.732569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.732783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.732807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.732922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.732947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.733154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.733219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.733569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.733592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.733792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.733862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 
00:37:32.143 [2024-10-08 21:05:00.734084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.734148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.734439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.734462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.734704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.734729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.734867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.734891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.735021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.735045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.735185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.735254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.735606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.735684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.735902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.735926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.736033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.736057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.736408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.736472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 
00:37:32.143 [2024-10-08 21:05:00.736721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.736745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.736903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.736977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.737202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.737266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.737532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.737555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.737694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.737759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.737985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.738048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.738338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.738362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.738621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.738705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.738923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.738987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.739203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.739226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 
00:37:32.143 [2024-10-08 21:05:00.739437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.739501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.739844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.739910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.740151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.740175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.740272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.740297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.143 [2024-10-08 21:05:00.740525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.143 [2024-10-08 21:05:00.740589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.143 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.740879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.740944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.741128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.741192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.741387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.741452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.741669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.741735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.741898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.741963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 
00:37:32.144 [2024-10-08 21:05:00.742154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.742220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.742467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.742530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.742767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.742833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.743062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.743086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.743189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.743213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.743338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.743402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.743691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.743717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.743851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.743915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.744172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.744237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.744488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.744512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 
00:37:32.144 [2024-10-08 21:05:00.744640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.744725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.745097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.745161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.745447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.745470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.745678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.745743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.745969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.746033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.746266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.746289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.746495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.746560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.746776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.746802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.746912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.746938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.747089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.747153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 
00:37:32.144 [2024-10-08 21:05:00.747421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.747485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.747661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.747705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.747844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.747869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.748125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.748188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.748386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.748420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.748622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.748705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.748895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.748958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.749182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.749205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.749341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.749389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.749624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.749706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 
00:37:32.144 [2024-10-08 21:05:00.749924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.749962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.750166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.750229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.750392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.750456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.750669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.750719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.144 [2024-10-08 21:05:00.750842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.144 [2024-10-08 21:05:00.750910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.144 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.751138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.751202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.751409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.751433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.751526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.751550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.751703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.751746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.751862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.751887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 
00:37:32.145 [2024-10-08 21:05:00.751999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.752022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.752208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.752272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.752508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.752531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.752664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.752708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.752855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.752918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.753223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.753247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.753402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.753466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.753698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.753763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.754089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.754116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.754310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.754374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 
00:37:32.145 [2024-10-08 21:05:00.754693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.754758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.754993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.755032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.755190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.755254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.755539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.755602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.755830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.755855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.755976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.756001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.756172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.756247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.756524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.756547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.756810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.756874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.757177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.757240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 
00:37:32.145 [2024-10-08 21:05:00.757510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.757533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.757760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.757824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.758075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.758139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.758340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.758364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.758584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.758648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.758922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.758997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.759203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.759227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.759435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.759498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.759743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.759768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 00:37:32.145 [2024-10-08 21:05:00.759887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.145 [2024-10-08 21:05:00.759912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.145 qpair failed and we were unable to recover it. 
00:37:32.146 [2024-10-08 21:05:00.760056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.760121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.760369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.760434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.760731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.760756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.760886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.760957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.761182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.761246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.761496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.761519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.761678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.761744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.761965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.762029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.762331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.762354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.762504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.762568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 
00:37:32.146 [2024-10-08 21:05:00.762927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.762968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.763095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.763118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.763304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.763328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.763539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.763603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.763852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.763877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.764053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.764117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.764431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.764495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.764830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.764855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.765056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.765130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.765351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.765415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 
00:37:32.146 [2024-10-08 21:05:00.765681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.765706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.765888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.765952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.766202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.766266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.766573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.766597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.766750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.766829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.767075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.767140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.767412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.767435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.767635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.767715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.767913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.767987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.768195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.768218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 
00:37:32.146 [2024-10-08 21:05:00.768457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.768521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.768731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.768796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.769001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.769025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.769235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.769299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.769577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.769640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.769855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.769880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.770015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.770039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.770227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.770291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.770607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.770684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 00:37:32.146 [2024-10-08 21:05:00.770902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.146 [2024-10-08 21:05:00.770928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.146 qpair failed and we were unable to recover it. 
00:37:32.146 [2024-10-08 21:05:00.771129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.771194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.771446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.771469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.771645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.771736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.771862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.771887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.772071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.772095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.772253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.772330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.772646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.772729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.772968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.773006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.773204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.773269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.773499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.773562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 
00:37:32.147 [2024-10-08 21:05:00.773755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.773789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.773912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.773936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.774099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.774162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.774479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.774502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.774686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.774751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.774960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.775024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.775226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.775250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.775383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.775408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.775647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.775769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.776044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.776067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 
00:37:32.147 [2024-10-08 21:05:00.776220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.776285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.776482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.776545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.776797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.776822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.776945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.777000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.777291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.777354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.777655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.777679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.777822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.777886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.778119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.778182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.778381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.778404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.778550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.778574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 
00:37:32.147 [2024-10-08 21:05:00.778789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.778815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.778980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.779019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.779251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.779317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.779479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.779543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.779820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.779847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.779997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.780060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.780309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.780374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.780616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.780670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.780821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.780889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 00:37:32.147 [2024-10-08 21:05:00.781088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.147 [2024-10-08 21:05:00.781152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.147 qpair failed and we were unable to recover it. 
00:37:32.147 [2024-10-08 21:05:00.781402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.781426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.781581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.781645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.781879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.781943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.782144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.782167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.782314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.782362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.782579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.782644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.782880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.782904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.783049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.783095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.783477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.783541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.783783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.783808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 
00:37:32.148 [2024-10-08 21:05:00.783937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.784000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.784280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.784344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.784578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.784617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.784789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.784855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.785091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.785156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.785375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.785400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.785531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.785588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.785827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.785854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.785982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.786025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.786151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.786176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 
00:37:32.148 [2024-10-08 21:05:00.786334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.786399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.786629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.786661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.786851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.786916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.787148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.787212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.787449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.787474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.787676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.787742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.788045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.788110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.788352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.788377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.788530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.788594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.788863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.788929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 
00:37:32.148 [2024-10-08 21:05:00.789130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.789155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.789317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.789364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.789613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.789695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.789905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.789945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.790075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.790123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.790411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.790474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.790719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.790746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.790869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.790946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.791188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.791251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 00:37:32.148 [2024-10-08 21:05:00.791604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.148 [2024-10-08 21:05:00.791629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.148 qpair failed and we were unable to recover it. 
00:37:32.148 [2024-10-08 21:05:00.791849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.791914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.792248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.792312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.792643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.792727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.792872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.792898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.793047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.793110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.793356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.793395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.793599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.793706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.793887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.793913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.794099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.794124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.794262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.794301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 
00:37:32.149 [2024-10-08 21:05:00.794534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.794598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.794901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.794927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.795058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.795121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.795370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.795433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.795704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.795738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.795876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.795940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.796172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.796236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.796415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.796441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.796580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.796609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.796983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.797049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 
00:37:32.149 [2024-10-08 21:05:00.797350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.797376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.797521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.797584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.797823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.797889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.798082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.798106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.798326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.798396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.798700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.798767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.798974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.799014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.799130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.799156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.799338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.799402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.799630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.799661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 
00:37:32.149 [2024-10-08 21:05:00.799839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.799903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.800139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.800203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.800392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.800418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.800544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.800570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.800858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.800884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.801041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.801066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.149 [2024-10-08 21:05:00.801328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.149 [2024-10-08 21:05:00.801392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.149 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.801592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.801674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.801931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.801957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.802090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.802149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 
00:37:32.150 [2024-10-08 21:05:00.802391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.802455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.802730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.802756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.802902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.802967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.803186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.803251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.803476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.803517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.803695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.803769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.804049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.804114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.804346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.804371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.804575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.804639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.804922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.804988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 
00:37:32.150 [2024-10-08 21:05:00.805256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.805296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.805438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.805502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.805807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.805873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.806223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.806249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.806422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.806486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.806741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.806807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.807081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.807106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.807260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.807324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.807574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.807648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.807954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.807980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 
00:37:32.150 [2024-10-08 21:05:00.808136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.808201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.808401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.808464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.808700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.808726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.808852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.808878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.809088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.809152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.809371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.809396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.809578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.809643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.809909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.809934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.810091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.810117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.810264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.810290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 
00:37:32.150 [2024-10-08 21:05:00.810455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.810522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.810787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.810814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.810971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.811046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.811308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.811372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.811586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.811611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.150 qpair failed and we were unable to recover it. 00:37:32.150 [2024-10-08 21:05:00.811761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.150 [2024-10-08 21:05:00.811824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.812001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.812064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.812274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.812300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.812406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.812431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.812600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.812680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 
00:37:32.151 [2024-10-08 21:05:00.812949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.812974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.813172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.813236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.813474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.813538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.813893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.813919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.814117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.814181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.814433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.814497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.814768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.814794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.814939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.815003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.815365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.815429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.815733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.815759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 
00:37:32.151 [2024-10-08 21:05:00.815901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.815965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.816331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.816394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.816714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.816740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.816931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.816995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.817307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.817372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.817607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.817632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.817768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.817843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.818124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.818190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.818388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.818418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.818523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.818549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 
00:37:32.151 [2024-10-08 21:05:00.818714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.818780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.819043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.819067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.819271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.819336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.819617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.819696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.820031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.820055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.820211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.820287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.820605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.820683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.821007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.821032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.821171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.821236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.821435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.821500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 
00:37:32.151 [2024-10-08 21:05:00.821746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.821772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.821960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.822024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.822338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.822403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.822686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.822712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.822885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.822949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.151 [2024-10-08 21:05:00.823189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.151 [2024-10-08 21:05:00.823252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.151 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.823495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.823559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.823834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.823860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.824012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.824076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.824361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.824387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 
00:37:32.152 [2024-10-08 21:05:00.824607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.824707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.824961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.825024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.825336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.825361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.825527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.825592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.825910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.825975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.826233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.826259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.826445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.826510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.826797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.826863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.827091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.827117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.827208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.827234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 
00:37:32.152 [2024-10-08 21:05:00.827431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.827496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.827748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.827774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.827960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.828024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.828267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.828332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.828664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.828701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.828927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.828991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.829265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.829329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.829569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.829594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.829684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.829714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.829984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.830048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 
00:37:32.152 [2024-10-08 21:05:00.830282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.830306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.830423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.830477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.830717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.830782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.831074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.831099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.831232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.831296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.831498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.831562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.831877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.831904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.832059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.832122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.832302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.832366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.832601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.832682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 
00:37:32.152 [2024-10-08 21:05:00.832885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.832918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.833199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.833263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.833505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.833571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.833894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.833920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.834132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.834196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.834374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.152 [2024-10-08 21:05:00.834400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.152 qpair failed and we were unable to recover it. 00:37:32.152 [2024-10-08 21:05:00.834575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.834633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.834985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.835049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.835224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.835248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.835426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.835467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 
00:37:32.153 [2024-10-08 21:05:00.835685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.835762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.836055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.836081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.836271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.836335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.836557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.836620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.836927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.836954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.837164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.837228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.837530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.837603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.837849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.837876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.838057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.838129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 00:37:32.153 [2024-10-08 21:05:00.838477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.838542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it. 
00:37:32.153 [2024-10-08 21:05:00.838860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.153 [2024-10-08 21:05:00.838887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:32.153 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f944c000b90 through 2024-10-08 21:05:00.865595 ...]
00:37:32.155 [2024-10-08 21:05:00.865946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.155 [2024-10-08 21:05:00.866047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.155 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f9444000b90 through 2024-10-08 21:05:00.896581 ...]
00:37:32.432 [2024-10-08 21:05:00.896851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.432 [2024-10-08 21:05:00.896878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.432 qpair failed and we were unable to recover it. 00:37:32.432 [2024-10-08 21:05:00.897049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.432 [2024-10-08 21:05:00.897117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.432 qpair failed and we were unable to recover it. 00:37:32.432 [2024-10-08 21:05:00.897377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.897443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.897688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.897744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.897899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.897924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.898091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.898158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.898476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.898506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.898781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.898808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.899059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.899139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.899433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.899459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 
00:37:32.433 [2024-10-08 21:05:00.899603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.899707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.899997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.900062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.900326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.900353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.900504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.900568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.900867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.900937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.901209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.901233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.901433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.901501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.901782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.901851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.902201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.902229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.902410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.902478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 
00:37:32.433 [2024-10-08 21:05:00.902767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.902835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.903128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.903156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.903295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.903360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.903644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.903744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.903956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.903996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.904153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.904222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.904453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.904518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.904812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.904839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.905006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.905072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.905379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.905446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 
00:37:32.433 [2024-10-08 21:05:00.905726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.905757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.905950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.906015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.906269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.906342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.906681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.906734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.906901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.906968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.433 [2024-10-08 21:05:00.907261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.433 [2024-10-08 21:05:00.907326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.433 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.907622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.907724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.907964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.908032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.908299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.908365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.908648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.908682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 
00:37:32.434 [2024-10-08 21:05:00.908870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.908936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.909215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.909283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.909591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.909621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.909830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.909898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.910180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.910244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.910509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.910534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.910679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.910750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.911025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.911090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.911365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.911390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.911530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.911596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 
00:37:32.434 [2024-10-08 21:05:00.911844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.911872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.911996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.912022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.912213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.912280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.912556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.912621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.912923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.912950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.913124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.913188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.913469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.913537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.913832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.913876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.914065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.914132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.914413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.914477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 
00:37:32.434 [2024-10-08 21:05:00.914763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.914789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.914934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.915012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.915321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.915389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.915694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.915736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.915903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.915968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.916219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.916309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.916586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.916613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.916777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.916856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.917138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.917203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.917485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.917517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 
00:37:32.434 [2024-10-08 21:05:00.917703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.917772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.918017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.918081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.918320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.918343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.434 [2024-10-08 21:05:00.918528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.434 [2024-10-08 21:05:00.918593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.434 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.918885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.918951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.919207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.919230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.919396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.919461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.919730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.919756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.919889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.919913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.920052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.920076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 
00:37:32.435 [2024-10-08 21:05:00.920343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.920407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.920698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.920723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.920852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.920877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.921021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.921047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.921199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.921225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.921393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.921418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.921619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.921703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.921988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.922012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.922175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.922240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.922491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.922556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 
00:37:32.435 [2024-10-08 21:05:00.922836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.922861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.923004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.923068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.923355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.923420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.923676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.923702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.923858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.923922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.924175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.924239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.924467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.924490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.924622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.924682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.924971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.925037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.925317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.925341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 
00:37:32.435 [2024-10-08 21:05:00.925502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.925566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.925876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.925944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.926190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.926213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.926376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.926441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.926708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.926734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.926900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.435 [2024-10-08 21:05:00.926929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.435 qpair failed and we were unable to recover it. 00:37:32.435 [2024-10-08 21:05:00.927133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.927198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.927473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.927538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.927794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.927819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.927990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.928054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 
00:37:32.436 [2024-10-08 21:05:00.928309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.928374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.928619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.928666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.928870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.928935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.929213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.929277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.929551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.929575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.929733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.929801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.930042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.930106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.930396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.930419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.930609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.930694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.930998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.931064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 
00:37:32.436 [2024-10-08 21:05:00.931367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.931390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.931542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.931615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.931914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.931980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.932243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.932266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.932456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.932520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.932737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.932805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.933048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.933086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.933266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.933330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.933599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.933683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.934032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.934056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 
00:37:32.436 [2024-10-08 21:05:00.934221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.934285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.934539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.934604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.934848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.934872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.935042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.935107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.935392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.935457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.935734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.935758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.935912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.935977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.936252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.936316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.936550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.936573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.436 [2024-10-08 21:05:00.936863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.936930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 
00:37:32.436 [2024-10-08 21:05:00.937180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.436 [2024-10-08 21:05:00.937244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.436 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.937529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.937552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.937722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.937790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.938072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.938136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.938417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.938441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.938605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.938696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.938954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.939018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.939223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.939246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.939396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.939460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.939742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.939808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 
00:37:32.437 [2024-10-08 21:05:00.940103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.940126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.940308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.940373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.940648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.940728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.941020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.941043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.941192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.941256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.941509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.941573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.941842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.941867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.942052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.942116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.942399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.942463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.942683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.942707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 
00:37:32.437 [2024-10-08 21:05:00.942884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.942908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.943191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.943256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.943524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.943588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.943847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.943872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.944008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.944072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.944308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.944332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.944469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.944534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.944743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.944809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.945067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.945091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.945276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.945339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 
00:37:32.437 [2024-10-08 21:05:00.945587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.945677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.945964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.945989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.946152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.946217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.946495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.946559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.946838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.946864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.946955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.946993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.947128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.947193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.947498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.947521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.947680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.437 [2024-10-08 21:05:00.947745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.437 qpair failed and we were unable to recover it. 00:37:32.437 [2024-10-08 21:05:00.947945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.948009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 
00:37:32.438 [2024-10-08 21:05:00.948302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.948325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.948478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.948542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.948858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.948925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.949192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.949216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.949415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.949479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.949754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.949831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.950108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.950132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.950278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.950342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.950597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.950677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.950928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.950968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 
00:37:32.438 [2024-10-08 21:05:00.951132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.951196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.951433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.951497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.951767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.951792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.951982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.952047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.952329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.952393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.952711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.952735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.952915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.952969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.953241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.953306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.953567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.953631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.953837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.953861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 
00:37:32.438 [2024-10-08 21:05:00.954052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.954116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.954385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.954408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.954577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.954642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.954928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.954994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.955268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.955291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.955494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.955559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.955835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.955901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.956144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.956167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.956366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.956430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.956700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.956766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 
00:37:32.438 [2024-10-08 21:05:00.957075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.957098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.957290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.957354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.957637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.957739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.958027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.958050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.958215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.958278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.958567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.958632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.438 [2024-10-08 21:05:00.958931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.438 [2024-10-08 21:05:00.958955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.438 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.959113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.959178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.959445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.959509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.959755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.959780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 
00:37:32.439 [2024-10-08 21:05:00.959970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.960034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.960297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.960361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.960669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.960727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.960826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.960851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.961018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.961082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.961318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.961346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.961544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.961609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.961886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.961952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.962230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.962253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.962409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.962472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 
00:37:32.439 [2024-10-08 21:05:00.962710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.962777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.963060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.963084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.963273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.963337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.963612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.963695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.964028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.964052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.964184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.964207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.964440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.964505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.964753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.964777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.964894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.964933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.965148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.965213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 
00:37:32.439 [2024-10-08 21:05:00.965484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.965507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.965692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.965758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.966047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.966112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.966358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.966381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.966546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.966611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.966904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.966970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.967250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.967273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.967453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.439 [2024-10-08 21:05:00.967518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.439 qpair failed and we were unable to recover it. 00:37:32.439 [2024-10-08 21:05:00.967796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.967863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.968108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.968132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 
00:37:32.440 [2024-10-08 21:05:00.968311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.968375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.968623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.968716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.968908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.968949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.969116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.969180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.969420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.969484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.969756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.969781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.969923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.969987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.970226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.970291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.970534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.970557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.970725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.970791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 
00:37:32.440 [2024-10-08 21:05:00.971065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.971129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.971420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.971443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.971603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.971686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.971975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.972039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.972312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.972335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.972519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.972594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.972891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.972958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.973221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.973244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.973432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.973496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.973776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.973842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 
00:37:32.440 [2024-10-08 21:05:00.974092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.974115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.974294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.974359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.974666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.974732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.975013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.975036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.975293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.975357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.975637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.975716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.975954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.975992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.976164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.976228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.976508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.976573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.976862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.976888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 
00:37:32.440 [2024-10-08 21:05:00.977031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.977096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.977341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.977406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.977712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.977737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.977868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.977893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.978055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.978120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.978369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.978392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.978500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.978559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.440 [2024-10-08 21:05:00.978794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.440 [2024-10-08 21:05:00.978862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.440 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.979101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.979124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.979314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.979379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 
00:37:32.441 [2024-10-08 21:05:00.979647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.979726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.979940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.979964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.980158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.980223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.980479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.980544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.980870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.980896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.981083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.981148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.981417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.981482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.981758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.981782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.981943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.982007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.982251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.982316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 
00:37:32.441 [2024-10-08 21:05:00.982558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.982622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.982902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.982927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.983182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.983248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.983544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.983608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.983876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.983901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.984026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.984101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.984389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.984413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.984555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.984620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.984921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.984986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 00:37:32.441 [2024-10-08 21:05:00.985224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.441 [2024-10-08 21:05:00.985247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.441 qpair failed and we were unable to recover it. 
00:37:32.446 [2024-10-08 21:05:01.042678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.042705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.042862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.042930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.043184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.043249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.043535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.043562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.043757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.043824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.044112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.044181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.044424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.044450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.044639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.044726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.044942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.045007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.045297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.045325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 
00:37:32.446 [2024-10-08 21:05:01.045503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.045568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.045849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.045934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.046202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.046234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.446 [2024-10-08 21:05:01.046437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.446 [2024-10-08 21:05:01.046504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.446 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.046783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.046853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.047169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.047197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.047366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.047434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.047732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.047801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.048093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.048120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.048292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.048357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 
00:37:32.447 [2024-10-08 21:05:01.048635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.048727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.048886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.048913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.049090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.049170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.049462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.049527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.049815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.049843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.050023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.050101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.050393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.050460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.050759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.050789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.050964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.051029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.051318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.051383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 
00:37:32.447 [2024-10-08 21:05:01.051681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.051709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.051868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.051947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.052204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.052270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.052534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.052561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.052735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.052803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.053094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.053159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.053471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.053496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.053666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.053735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.054052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.054119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.054365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.054392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 
00:37:32.447 [2024-10-08 21:05:01.054537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.054610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.054904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.054984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.055254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.055279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.055451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.055519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.055795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.055864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.056158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.056185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.056369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.056434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.056685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.056749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.056914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.056940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.057067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.057095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 
00:37:32.447 [2024-10-08 21:05:01.057346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.057412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.057696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.057740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.057908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.057974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.058268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.447 [2024-10-08 21:05:01.058336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.447 qpair failed and we were unable to recover it. 00:37:32.447 [2024-10-08 21:05:01.058664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.058691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.058928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.058996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.059297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.059362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.059616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.059643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.059809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.059876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.060156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.060223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 
00:37:32.448 [2024-10-08 21:05:01.060462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.060505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.060676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.060745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.061053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.061131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.061429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.061454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.061635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.061744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.062048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.062114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.062394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.062420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.062575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.062640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.062946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.063026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.063311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.063337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 
00:37:32.448 [2024-10-08 21:05:01.063544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.063612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.063919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.063986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.064256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.064283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.064455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.064521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.064817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.064845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.065026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.065053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.065315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.065381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.065710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.065792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.066054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.066079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.066296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.066364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 
00:37:32.448 [2024-10-08 21:05:01.066667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.066740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.067024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.067050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.067251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.067325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.067639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.067729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.068019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.068045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.068231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.068297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.068533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.068610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.068920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.068948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.448 [2024-10-08 21:05:01.069096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.448 [2024-10-08 21:05:01.069166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.448 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.069464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.069529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 
00:37:32.449 [2024-10-08 21:05:01.069785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.069818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.069994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.070060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.070319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.070402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.070696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.070723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.070914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.070984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.071251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.071315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.071602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.071628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.071842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.071911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.072196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.072264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.072526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.072591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 
00:37:32.449 [2024-10-08 21:05:01.072893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.072921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.073119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.073184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.073473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.073505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.073710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.073740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.073863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.073889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.074003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.074047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.074225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.074293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.074584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.074672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.074965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.075005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.075236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.075305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 
00:37:32.449 [2024-10-08 21:05:01.075603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.075692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.076019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.076045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.076302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.076367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.076688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.076758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.077032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.077062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.077251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.077319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.077560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.077625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.077930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.077960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.078155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.078220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.078500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.078577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 
00:37:32.449 [2024-10-08 21:05:01.078863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.078894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.079073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.079140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.079390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.079456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.079711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.079744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.079928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.079993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.080288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.080371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.080666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.080696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.080899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.080971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.449 [2024-10-08 21:05:01.081269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.449 [2024-10-08 21:05:01.081336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.449 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.081642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.081738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 
00:37:32.450 [2024-10-08 21:05:01.081884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.081946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.082238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.082315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.082610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.082703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.082929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.083003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.083292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.083357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.083645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.083717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.083914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.083981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.084247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.084327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.084607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.084636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.084859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.084941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 
00:37:32.450 [2024-10-08 21:05:01.085241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.085306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.085575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.085605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.085805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.085884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.086137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.086214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.086489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.086518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.086697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.086768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.087008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.087074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.087357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.087390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.087567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.087633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.087896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.087926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 
00:37:32.450 [2024-10-08 21:05:01.088228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.088258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.088448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.088517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.088796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.088863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.089119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.089160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.089304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.089370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.089672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.089742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.090086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.090116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.090414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.090482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.090770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.090837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.091116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.091146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 
00:37:32.450 [2024-10-08 21:05:01.091316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.091381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.091628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.091715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.091986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.092015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.092172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.092253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.092548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.092612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.092910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.092941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.093090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.093156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.093449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.093526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.093835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.450 [2024-10-08 21:05:01.093866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.450 qpair failed and we were unable to recover it. 00:37:32.450 [2024-10-08 21:05:01.094107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.094184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 
00:37:32.451 [2024-10-08 21:05:01.094470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.094535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.094814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.094851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.095036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.095102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.095360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.095424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.095721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.095751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.095945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.096011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.096309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.096376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.096677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.096744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.096969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.097038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.097334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.097399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 
00:37:32.451 [2024-10-08 21:05:01.097670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.097731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.097833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.097863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.098042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.098130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.098425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.098454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.098672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.098756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.099059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.099125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.099424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.099454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.099729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.099797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.100107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.100189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.100448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.100477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 
00:37:32.451 [2024-10-08 21:05:01.100723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.100792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.101096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.101162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.101411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.101447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.101686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.101753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.102061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.102129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.102405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.102434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.102619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.102721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.102997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.103062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.103337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.103367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.103515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.103581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 
00:37:32.451 [2024-10-08 21:05:01.103914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.103988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.104253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.104283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.104500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.104569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.104841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.104907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.105188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.105222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.105412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.105480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.105764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.105803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.106011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.106041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.106217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.451 [2024-10-08 21:05:01.106299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.451 qpair failed and we were unable to recover it. 00:37:32.451 [2024-10-08 21:05:01.106595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.106680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 
00:37:32.452 [2024-10-08 21:05:01.106962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.106991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.107128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.107194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.107453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.107519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.107873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.107904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.108137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.108218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.108510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.108575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.108868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.108902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.109140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.109206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.109444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.109513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.109829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.109859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 
00:37:32.452 [2024-10-08 21:05:01.110067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.110134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.110440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.110507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.110753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.110795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.110942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.111008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.111256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.111320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.111596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.111626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.111788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.111855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.112129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.112198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.112441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.112469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.112643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.112736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 
00:37:32.452 [2024-10-08 21:05:01.113021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.113087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.113388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.113418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.113613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.113709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.113873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.113904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.114182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.114211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.114416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.114489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.114776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.114847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.115081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.115110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.115289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.115354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.115636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.115725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 
00:37:32.452 [2024-10-08 21:05:01.115999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.116027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.116172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.116236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.116480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.116545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.116846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.116875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.452 [2024-10-08 21:05:01.117022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.452 [2024-10-08 21:05:01.117086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.452 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.117376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.117441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.117686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.117715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.117871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.117936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.118246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.118311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.118597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.118626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 
00:37:32.453 [2024-10-08 21:05:01.118814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.118881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.119119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.119182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.119455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.119484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.119642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.119725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.119939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.120003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.120250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.120279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.120444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.120509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.120805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.120871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.121143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.121172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.121356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.121421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 
00:37:32.453 [2024-10-08 21:05:01.121711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.121741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.121918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.121948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.122079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.122156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.122446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.122510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.122753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.122783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.122964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.123028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.123273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.123338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.123593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.123621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.123776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.123842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.124119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.124183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 
00:37:32.453 [2024-10-08 21:05:01.124453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.124482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.124618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.124702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.124954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.125019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.125297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.125326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.125483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.125547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.125856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.125924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.126211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.126240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.126364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.126428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.126686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.126754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.127038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.127066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 
00:37:32.453 [2024-10-08 21:05:01.127257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.127322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.127583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.127647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.127922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.127951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.128144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.128209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.453 [2024-10-08 21:05:01.128514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.453 [2024-10-08 21:05:01.128579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.453 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.128846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.128875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.129059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.129125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.129399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.129465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.129740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.129769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.129965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.130030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 
00:37:32.454 [2024-10-08 21:05:01.130280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.130345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.130621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.130718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.130940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.131006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.131247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.131312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.131556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.131619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.131864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.131893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.132177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.132242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.132488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.132516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.132696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.132763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.133071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.133136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 
00:37:32.454 [2024-10-08 21:05:01.133361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.133390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.133549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.133613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.133944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.134022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.134292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.134321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.134486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.134552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.134854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.134921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.135224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.135253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.135526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.135590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.135875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.135904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.136104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.136133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 
00:37:32.454 [2024-10-08 21:05:01.136239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.136314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.136545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.136609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.136915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.136944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.137150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.137215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.137483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.137547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.137832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.137862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.138049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.138114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.138359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.138423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.138669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.138699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.138885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.138951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 
00:37:32.454 [2024-10-08 21:05:01.139203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.139267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.139582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.139611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.139909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.139974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.140229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.140293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.140534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.454 [2024-10-08 21:05:01.140563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.454 qpair failed and we were unable to recover it. 00:37:32.454 [2024-10-08 21:05:01.140739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.140806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.141093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.141157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.141429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.141457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.141681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.141748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.141998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.142064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 
00:37:32.455 [2024-10-08 21:05:01.142344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.142373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.142535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.142600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.142916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.142982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.143190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.143219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.143379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.143444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.143722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.143751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.143926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.143955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.144101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.144166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.144405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.144469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.144740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.144770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 
00:37:32.455 [2024-10-08 21:05:01.144963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.145028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.145270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.145334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.145544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.145578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.145737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.145804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.146084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.146148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.146391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.146419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.146557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.146622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.146872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.146937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.147169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.147197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.147348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.147413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 
00:37:32.455 [2024-10-08 21:05:01.147683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.147748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.148034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.148063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.148282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.148347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.148598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.148679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.148981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.149010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.149127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.149192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.149442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.149505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.149774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.149804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.149994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.150058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.150304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.150369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 
00:37:32.455 [2024-10-08 21:05:01.150683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.150735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.150922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.150987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.151273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.151339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.151629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.151709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.151952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.455 [2024-10-08 21:05:01.152017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.455 qpair failed and we were unable to recover it. 00:37:32.455 [2024-10-08 21:05:01.152257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.152322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.152615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.152693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.152940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.153008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.153253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.153318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.153591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.153620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 
00:37:32.456 [2024-10-08 21:05:01.153788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.153855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.154150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.154213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.154488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.154517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.154673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.154740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.154998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.155062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.155348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.155376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.155572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.155636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.155945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.156009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.156279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.156308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.156461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.156526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 
00:37:32.456 [2024-10-08 21:05:01.156749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.156778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.156942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.156971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.157149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.157225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.157477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.157541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.157801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.157830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.158022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.158087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.158339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.158403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.158634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.158674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.158866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.158931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.159207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.159272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 
00:37:32.456 [2024-10-08 21:05:01.159548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.159576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.159769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.159835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.160118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.160183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.160463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.160491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.160681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.160748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.160977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.161042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.161296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.161324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.161498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.161563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.161853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.161919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.162202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.162231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 
00:37:32.456 [2024-10-08 21:05:01.162362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.162426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.162718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.162785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.163062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.163091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.163241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.163306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.163578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.163642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.456 qpair failed and we were unable to recover it. 00:37:32.456 [2024-10-08 21:05:01.163942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.456 [2024-10-08 21:05:01.163970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.164128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.164193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.164430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.164494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.164764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.164792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.164985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.165050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 
00:37:32.457 [2024-10-08 21:05:01.165309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.165374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.165681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.165733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.165922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.165986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.166223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.166287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.166557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.166621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.166851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.166880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.167125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.167189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.167435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.167463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.167611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.167695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.167979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.168045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 
00:37:32.457 [2024-10-08 21:05:01.168302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.168330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.168475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.168540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.168840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.168918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.169174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.169202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.169380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.169444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.169734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.169801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.170045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.170073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.170263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.170326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.170570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.170634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.170913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.170943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 
00:37:32.457 [2024-10-08 21:05:01.171184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.171248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.171486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.171550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.171846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.171876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.172043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.172107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.172360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.172423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.172695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.172724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.172867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.172895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.173031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.173060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.173196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.173224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 00:37:32.457 [2024-10-08 21:05:01.173397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.173426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.457 qpair failed and we were unable to recover it. 
00:37:32.457 [2024-10-08 21:05:01.173552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.457 [2024-10-08 21:05:01.173606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.173880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.173909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.174035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.174101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.174314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.174378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.174610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.174638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.174777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.174806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.174937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.174966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.175138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.175167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.175301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.175330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.175522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.175588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 
00:37:32.458 [2024-10-08 21:05:01.175866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.175894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.176068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.176133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.176321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.176387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.176666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.176695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.458 [2024-10-08 21:05:01.176831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.458 [2024-10-08 21:05:01.176860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.458 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.177051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.177115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.177356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.177384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.177548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.177613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.177872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.177901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.178042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 
00:37:32.733 [2024-10-08 21:05:01.178166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.178325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.178508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.178683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.178847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.178875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.179039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.179067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.179236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.179265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.179403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.179433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.179563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.179591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.179768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.179797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 
00:37:32.733 [2024-10-08 21:05:01.179972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.180967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.180996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.181307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.181335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.181531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-10-08 21:05:01.181596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-10-08 21:05:01.181896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.181961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-10-08 21:05:01.182217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.182246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.182399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.182464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.182731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.182798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.183083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.183111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.183248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.183313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.183595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.183672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.183869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.183898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.184085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.184151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.184389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.184453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.184741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.184771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-10-08 21:05:01.184937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.185002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.185241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.185304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.185579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.185608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.185889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.185955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.186237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.186301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.186550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.186578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.186763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.186830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.187076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.187140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.187420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.187448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.187623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.187702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-10-08 21:05:01.187996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.188059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.188335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.188364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.188574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.188639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.188939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.189004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.189291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.189319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.189494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.189559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.189887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.189952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.190237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.190265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.190464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.190528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.190798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.190865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-10-08 21:05:01.191143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.191172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.191334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.191399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.191666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.191728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.191966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.192028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.192285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.192350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.192662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.192728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.193017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.193045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.193198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.193263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.193497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-10-08 21:05:01.193560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-10-08 21:05:01.193844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.193873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-10-08 21:05:01.194069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.194134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.194425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.194488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.194768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.194798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.194984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.195048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.195328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.195391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.195664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.195693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.195823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.195887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.196095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.196159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.196415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.196443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.196594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.196683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-10-08 21:05:01.196970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.197036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.197295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.197324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.197467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.197538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.197808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.197873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.198149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.198178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.198370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.198434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.198704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.198769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.199053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.199081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.199278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.199343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.199616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.199694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-10-08 21:05:01.199924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.199952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.200118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.200183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.200425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.200490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.200746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.200775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.200967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.201031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.201299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.201363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.201621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.201656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.201835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.201898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.202183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.202246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.202507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.202535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-10-08 21:05:01.202688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.202753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.203041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.203106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.203353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.203382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.203534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.203597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.203828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.203894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.204170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.204199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.204420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.204484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.204722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.204788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.205039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-10-08 21:05:01.205068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-10-08 21:05:01.205204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.205268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 
00:37:32.736 [2024-10-08 21:05:01.205479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.205542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.205771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.205801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.205958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.206023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.206241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.206305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.206580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.206644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.206866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.206895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.207130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.207193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.207431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.207460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.207637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.207714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.207895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.207992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 
00:37:32.736 [2024-10-08 21:05:01.208263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.208292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.208420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.208483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.208767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.208834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.209095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.209123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.209279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.209343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.209587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.209665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.209935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.209964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.210141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.210205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.210483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.210547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.210805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.210834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 
00:37:32.736 [2024-10-08 21:05:01.211004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.211067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.211348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.211412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.211580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.211608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.211758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.211835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.212077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.212141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.212414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.212443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.212631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.212714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.212997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.213062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.213325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.213353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.213527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.213591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 
00:37:32.736 [2024-10-08 21:05:01.213912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.213978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.214233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.214262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.214437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.214502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.214750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.214816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.215076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.215104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.215253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.215317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.215603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.215681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.215892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.215921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.216105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.216169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.736 qpair failed and we were unable to recover it. 00:37:32.736 [2024-10-08 21:05:01.216410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.736 [2024-10-08 21:05:01.216473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 
00:37:32.737 [2024-10-08 21:05:01.216751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.216780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.216966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.217031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.217304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.217367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.217621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.217657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.217875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.217940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.218221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.218284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.218565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.218594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.218767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.218835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.219088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.219152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.219379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.219412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 
00:37:32.737 [2024-10-08 21:05:01.219588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.219667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.219946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.220010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.220247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.220275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.220426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.220489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.220746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.220813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.221060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.221088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.221260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.221323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.221610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.221707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.221982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.222010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.222153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.222216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 
00:37:32.737 [2024-10-08 21:05:01.222512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.222576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.222829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.222858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.222990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.223018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.223283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.223348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.223593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.223675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.223875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.223925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.224191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.224255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.224539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.224603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.224935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.225000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.225259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.225324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 
00:37:32.737 [2024-10-08 21:05:01.225619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.225703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.225913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.225977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.737 [2024-10-08 21:05:01.226232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.737 [2024-10-08 21:05:01.226296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.737 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.226531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.226559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.226717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.226783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.227059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.227124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.227374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.227402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.227581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.227645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.227906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.227971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.228252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.228280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 
00:37:32.738 [2024-10-08 21:05:01.228450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.228515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.228754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.228820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.229107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.229136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.229320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.229384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.229624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.229720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.229898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.229927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.230105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.230170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.230455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.230520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.230791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.230820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.231007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.231082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 
00:37:32.738 [2024-10-08 21:05:01.231367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.231431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.231706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.231735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.231926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.231990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.232229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.232292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.232530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.232558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.232730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.232794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.233110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.233174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.233431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.233460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.233635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.233714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.234004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.234068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 
00:37:32.738 [2024-10-08 21:05:01.234348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.234377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.234586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.234668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.234950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.235014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.235294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.235323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.235498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.235563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.235858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.235924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.236198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.236227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.236408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.236472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.236740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.236806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.237083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.237112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 
00:37:32.738 [2024-10-08 21:05:01.237308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.237372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.237611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.237712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.237892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.738 [2024-10-08 21:05:01.237921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.738 qpair failed and we were unable to recover it. 00:37:32.738 [2024-10-08 21:05:01.238111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.238182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.238458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.238522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.238798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.238827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.239003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.239067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.239340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.239405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.239688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.239717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.239958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.240023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 
00:37:32.739 [2024-10-08 21:05:01.240258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.240322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.240510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.240538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.240721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.240788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.241024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.241088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.241319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.241347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.241493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.241557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.241761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.241828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.242051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.242080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.242238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.242303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.242548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.242623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 
00:37:32.739 [2024-10-08 21:05:01.242856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.242885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.243039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.243104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.243350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.243415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.243665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.243703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.243844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.243909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.244145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.244210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.244388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.244417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.244603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.244685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.244920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.244985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.245218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.245246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 
00:37:32.739 [2024-10-08 21:05:01.245384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.245448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.245714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.245744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.245881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.245911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.246064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.246129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.246341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.246407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.246648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.246695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.246865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.246929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.247178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.247243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.247512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.247541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.247718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.247784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 
00:37:32.739 [2024-10-08 21:05:01.248036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.248101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.248353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.248381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.248538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.739 [2024-10-08 21:05:01.248601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.739 qpair failed and we were unable to recover it. 00:37:32.739 [2024-10-08 21:05:01.248866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.248931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.249184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.249213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.249411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.249476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.249714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.249781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.250007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.250037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.250180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.250244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.250496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.250560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 
00:37:32.740 [2024-10-08 21:05:01.250749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.250778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.250904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.250958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.251182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.251247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.251499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.251527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.251687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.251754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.251980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.252046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.252248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.252277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.252445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.252510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.252763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.252792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.252927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.252960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 
00:37:32.740 [2024-10-08 21:05:01.253102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.253167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.253430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.253494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.253769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.253799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.253956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.254020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.254280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.254344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.254562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.254591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.254746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.254812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.255024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.255089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.255319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.255348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.255528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.255593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 
00:37:32.740 [2024-10-08 21:05:01.255869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.255935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.256215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.256244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.256388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.256452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.256705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.256772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.257056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.257084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.257310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.257374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.257686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.257752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.257999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.258027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.258185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.258250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.258531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.258596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 
00:37:32.740 [2024-10-08 21:05:01.258823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.258852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.259008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.259073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.259299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.740 [2024-10-08 21:05:01.259364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.740 qpair failed and we were unable to recover it. 00:37:32.740 [2024-10-08 21:05:01.259588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.259666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.259868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.259917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.260197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.260262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.260580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.260644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.260841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.260870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.261112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.261176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.261452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.261481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 
00:37:32.741 [2024-10-08 21:05:01.261708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.261737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.261842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.261871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.262073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.262102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.262249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.262313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.262565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.262628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.262854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.262883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.263040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.263104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.263353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.263417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.263717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.263746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.263876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.263962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 
00:37:32.741 [2024-10-08 21:05:01.264243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.264307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.264582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.264610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.264742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.264808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.265073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.265137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.265420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.265448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.265696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.265763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.266104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.266169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.266446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.266475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.266667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.266734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.266999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.267063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 
00:37:32.741 [2024-10-08 21:05:01.267309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.267337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.267518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.267582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.267820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.267887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.268180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.268209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.268392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.268456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.268764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.268831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.269099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.269128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.269297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.269361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.269611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.269688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.269845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.269874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 
00:37:32.741 [2024-10-08 21:05:01.270022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.270086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.270332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.270396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.741 qpair failed and we were unable to recover it. 00:37:32.741 [2024-10-08 21:05:01.270679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.741 [2024-10-08 21:05:01.270708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.270875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.270938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.271191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.271255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.271495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.271524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.271686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.271753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.271977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.272041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.272269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.272297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.272449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.272514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 
00:37:32.742 [2024-10-08 21:05:01.272703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.272769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.272986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.273014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.273191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.273255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.273537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.273601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.273790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.273818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.273977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.274041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.274331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.274395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.274673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.274702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.274870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.274935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.275205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.275280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 
00:37:32.742 [2024-10-08 21:05:01.275550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.275579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.275759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.275826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.276110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.276174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.276382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.276410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.276586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.276670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.276867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.276896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.277159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.277187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.277331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.277396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.277710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.277776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.278059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.278088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 
00:37:32.742 [2024-10-08 21:05:01.278265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.278329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.278572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.278636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.278862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.278891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.279076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.279141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.279425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.279489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.279760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.279790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.279970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.280035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.280299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.280364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.742 qpair failed and we were unable to recover it. 00:37:32.742 [2024-10-08 21:05:01.280641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.742 [2024-10-08 21:05:01.280677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.280864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.280929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 
00:37:32.743 [2024-10-08 21:05:01.281186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.281250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.281455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.281483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.281592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.281706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.281964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.282030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.282252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.282281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.282462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.282526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.282853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.282920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.283201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.283229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.283436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.283500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.283747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.283813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 
00:37:32.743 [2024-10-08 21:05:01.284106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.284135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.284357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.284428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.284732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.284763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.284894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.284923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.285039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.285130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.285385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.285451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.285694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.285724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.285838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.285904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.286146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.286214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.286514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.286549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 
00:37:32.743 [2024-10-08 21:05:01.286741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.286812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.287087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.287153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.287423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.287453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.287615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.287705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.288025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.288110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.288394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.288422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.288550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.288617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.288956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.289024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.289294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.289324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.289511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.289576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 
00:37:32.743 [2024-10-08 21:05:01.289891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.289974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.290270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.290300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.290429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.290505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.290846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.290914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.291145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.291188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.291368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.291435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.291725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.291793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.292034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.292064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.743 [2024-10-08 21:05:01.292238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.743 [2024-10-08 21:05:01.292303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.743 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.292587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.292673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 
00:37:32.744 [2024-10-08 21:05:01.292902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.292931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.293112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.293185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.293464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.293530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.293820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.293851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.294028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.294094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.294324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.294399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.294702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.294732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.294986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.295055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.295314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.295379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.295638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.295680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 
00:37:32.744 [2024-10-08 21:05:01.295857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.295924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.296212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.296295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.296556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.296585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.296770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.296840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.297149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.297214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.297459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.297489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.297680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.297749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.298040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.298116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.298409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.298439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.298639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.298748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 
00:37:32.744 [2024-10-08 21:05:01.299024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.299091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.299334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.299364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.299550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.299614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.299913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.299981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.300272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.300301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.300494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.300572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.300873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.300903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.301107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.301139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.301313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.301366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.301602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.301716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 
00:37:32.744 [2024-10-08 21:05:01.302031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.302060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.302188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.302256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.302563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.302629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.302938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.302968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.303091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.303155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.303395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.303459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.303682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.303713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.303893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.303958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.744 [2024-10-08 21:05:01.304234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.744 [2024-10-08 21:05:01.304301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.744 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.304579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.304608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 
00:37:32.745 [2024-10-08 21:05:01.304786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.304856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.305084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.305148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.305397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.305427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.305692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.305763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.306065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.306147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.306471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.306502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.306840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.306909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.307194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.307260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.307561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.307591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.307872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.307960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 
00:37:32.745 [2024-10-08 21:05:01.308278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.308343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.308636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.308739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.309008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.309075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.309333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.309405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.309723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.309754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.310020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.310089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.310387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.310452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.310724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.310756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.310941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.311006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.311303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.311380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 
00:37:32.745 [2024-10-08 21:05:01.311667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.311701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.311978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.312047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.312339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.312404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.312701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.312732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.312956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.313021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.313341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.313409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.313711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.313748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.314014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.314080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.314381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.314458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.314749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.314779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 
00:37:32.745 [2024-10-08 21:05:01.314936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.315015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.315296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.315361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.315664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.315733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.315948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.316015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.316328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.316395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.316688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.316750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.316916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.316984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.317214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.317280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.745 [2024-10-08 21:05:01.317518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.745 [2024-10-08 21:05:01.317548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.745 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.317749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.317818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 
00:37:32.746 [2024-10-08 21:05:01.318101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.318166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.318391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.318421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.318523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.318598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.318901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.318984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.319314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.319344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.319674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.319742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.320036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.320102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.320414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.320444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.320735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.320766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.320942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.321010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 
00:37:32.746 [2024-10-08 21:05:01.321289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.321320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.321495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.321561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.321877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.321944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.322201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.322231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.322389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.322456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.322763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.322832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.323071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.323100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.323257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.323324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.323642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.323727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.324003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.324049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 
00:37:32.746 [2024-10-08 21:05:01.324291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.324370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.324683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.324754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.325003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.325038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.325188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.325253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.325501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.325565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.325864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.325895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.326114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.326179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.326449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.326518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.326759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.326789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 00:37:32.746 [2024-10-08 21:05:01.326970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.327039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it. 
00:37:32.746 [2024-10-08 21:05:01.327300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.746 [2024-10-08 21:05:01.327365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.746 qpair failed and we were unable to recover it.
[identical error sequence repeats for every reconnect attempt from 21:05:01.327 through 21:05:01.395: posix.c:1055:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420, and each attempt ending with "qpair failed and we were unable to recover it."]
00:37:32.752 [2024-10-08 21:05:01.395189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.395264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it.
00:37:32.752 [2024-10-08 21:05:01.395520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.395585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.395877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.395915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.396103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.396169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.396446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.396519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.396838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.396868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.397147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.397216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.397512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.397598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.397931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.397977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.398240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.398271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.398477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.398526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 
00:37:32.752 [2024-10-08 21:05:01.398725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.398756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.398945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.398975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.399186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.399238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.399441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.399491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.399668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.399698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.399859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.399888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.400096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.400150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.400296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.400350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.400531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.400560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.400669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.400699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 
00:37:32.752 [2024-10-08 21:05:01.400897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.400944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.401127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.401178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.401330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.401382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.401530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.401558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.401753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.401809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.402056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.402106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.402265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.402322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.402551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.402580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.402844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.402903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 00:37:32.752 [2024-10-08 21:05:01.403134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.752 [2024-10-08 21:05:01.403186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.752 qpair failed and we were unable to recover it. 
00:37:32.753 [2024-10-08 21:05:01.403371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.403426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.403601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.403630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.403792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.403855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.404092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.404142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.404345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.404398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.404627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.404662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.404907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.404957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.405171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.405223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.405407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.405466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.405637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.405676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 
00:37:32.753 [2024-10-08 21:05:01.405877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.405907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.406120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.406170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.406347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.406400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.406580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.406608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.406785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.406836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.407021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.407072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.407201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.407258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.407413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.407442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.407595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.407623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.407874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.407926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 
00:37:32.753 [2024-10-08 21:05:01.408114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.408163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.408349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.408399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.408548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.408577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.408727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.408790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.408963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.409014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.409218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.409264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.409430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.409460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.409700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.409729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.409885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.409946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.410209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.410260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 
00:37:32.753 [2024-10-08 21:05:01.410403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.410454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.410658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.410688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.410912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.410942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.411145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.411194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.411432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.411481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.411693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.411723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.411892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.411946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.412168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.412215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.412359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.412410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 00:37:32.753 [2024-10-08 21:05:01.412573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.753 [2024-10-08 21:05:01.412601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.753 qpair failed and we were unable to recover it. 
00:37:32.753 [2024-10-08 21:05:01.412787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.412856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.412981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.413051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.413262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.413313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.413523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.413552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.413691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.413721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.413875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.413938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.414135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.414185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.414381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.414433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.414593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.414622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.414871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.414923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 
00:37:32.754 [2024-10-08 21:05:01.415107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.415156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.415318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.415368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.415528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.415561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.415777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.415829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.416059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.416114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.416278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.416327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.416424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.416458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.416620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.416659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.416902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.416969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.417164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.417213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 
00:37:32.754 [2024-10-08 21:05:01.417460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.417508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.417761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.417810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.417953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.418183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.418327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.418480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.418598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.418817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.418869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.419012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.419061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.419221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.419259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 
00:37:32.754 [2024-10-08 21:05:01.419436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.419465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.419620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.419648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.419879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.419938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.420137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.420186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.420453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.420503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.420706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.420770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.420947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.421001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.421213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.421265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.754 qpair failed and we were unable to recover it. 00:37:32.754 [2024-10-08 21:05:01.421480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.754 [2024-10-08 21:05:01.421509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.421730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.421789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 
00:37:32.755 [2024-10-08 21:05:01.422036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.422086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.422331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.422382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.422613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.422641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.422880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.422943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.423160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.423208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.423360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.423411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.423554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.423583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.423785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.423837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.424086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.424141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.424349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.424398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 
00:37:32.755 [2024-10-08 21:05:01.424599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.424628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.424820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.424871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.425079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.425129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.425349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.425397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.425547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.425575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.425770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.425821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.426061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.426114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.426304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.426355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.426565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.426594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.426746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.426797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 
00:37:32.755 [2024-10-08 21:05:01.427033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.427080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.427280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.427331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.427508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.427536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.427637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.427673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.427891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.427942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.428098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.428148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.428331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.428382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.428580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.428609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.428828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.428877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.429051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.429100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 
00:37:32.755 [2024-10-08 21:05:01.429319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.429370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.429575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.429603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.429861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.429911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.430162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.430211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.430350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.430398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.755 [2024-10-08 21:05:01.430529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.755 [2024-10-08 21:05:01.430557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.755 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.430735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.430787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.430999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.431051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.431209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.431258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.431500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.431529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 
00:37:32.756 [2024-10-08 21:05:01.431646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.431682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.431835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.431863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.432103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.432132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.432292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.432343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.432499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.432531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.432695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.432726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.432889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.432918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.433071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.433100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.433333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.433383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.433500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.433528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 
00:37:32.756 [2024-10-08 21:05:01.433796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.433845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.434041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.434092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.434282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.434329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.434487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.434520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.434747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.434799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.435021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.435049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.435231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.435286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.435492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.435521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.435760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.435811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.436012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.436041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 
00:37:32.756 [2024-10-08 21:05:01.436214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.436263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.436383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.436412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.436565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.436593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.436766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.436796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.437040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.437068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.437199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.437228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.437414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.437443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.437685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.437715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.437969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.438019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.438224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.438275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 
00:37:32.756 [2024-10-08 21:05:01.438508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.438561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.438728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.438758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.438942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.438992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.439177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.439226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.439374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.439426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.439561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.756 [2024-10-08 21:05:01.439590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.756 qpair failed and we were unable to recover it. 00:37:32.756 [2024-10-08 21:05:01.439749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.439800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.440028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.440079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.440282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.440330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.440511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.440540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 
00:37:32.757 [2024-10-08 21:05:01.440732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.440783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.440977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.441027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.441221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.441272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.441468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.441497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.441733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.441783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.442023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.442071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.442261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.442307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.442537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.442565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.442727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.442779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.442999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.443047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 
00:37:32.757 [2024-10-08 21:05:01.443281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.443329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.443497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.443526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.443668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.443697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.443935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.443984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.444215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.444264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.444496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.444524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.444720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.444780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.444927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.444980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.445229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.445278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.445418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.445447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 
00:37:32.757 [2024-10-08 21:05:01.445679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.445709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.445905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.445952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.446130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.446180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.446402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.446452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.446671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.446702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.446908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.446956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.447192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.447242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.447484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.447534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.447795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.447845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.448102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.448160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 
00:37:32.757 [2024-10-08 21:05:01.448382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.448432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.448551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.448579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.448769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.448823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.449078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.449128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.449306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.449357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.449492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.757 [2024-10-08 21:05:01.449525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.757 qpair failed and we were unable to recover it. 00:37:32.757 [2024-10-08 21:05:01.449705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.449767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.449989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.450234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.450383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 
00:37:32.758 [2024-10-08 21:05:01.450536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.450732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.450947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.450975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.451178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.451227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.451406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.451435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.451632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.451668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.451813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.451862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.452085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.452134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.452361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.452412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.452611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.452640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 
00:37:32.758 [2024-10-08 21:05:01.452860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.452920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.453156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.453204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.453414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.453463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.453561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.453589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.453739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.453799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.453930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.453986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.454149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.454201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.454366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.454416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.454615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.454644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.454909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.454961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 
00:37:32.758 [2024-10-08 21:05:01.455172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.455222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.455418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.455465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.455665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.455695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.455893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.455922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.456109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.456157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.456345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.456392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.456517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.456546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.456680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.456709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.456908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.456961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.457105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.457156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 
00:37:32.758 [2024-10-08 21:05:01.457340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.457391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.457527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.457555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.457716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.457746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.457984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.458013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.458264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.458313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.458508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.758 [2024-10-08 21:05:01.458537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.758 qpair failed and we were unable to recover it. 00:37:32.758 [2024-10-08 21:05:01.458672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.458704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.458917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.458965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.459217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.459265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.459437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.459466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 
00:37:32.759 [2024-10-08 21:05:01.459665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.459694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.459831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.459881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.460026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.460076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.460213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.460268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.460490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.460518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.460682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.460712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.460969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.461024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.461178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.461227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.461318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.461346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.461536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.461565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 
00:37:32.759 [2024-10-08 21:05:01.461740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.461792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.462028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.462078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.462267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.462317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.462492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.462520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.462659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.462688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.462889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.462942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.463164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.463219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.463465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.463516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.463673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.463702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.463905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.463951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 
00:37:32.759 [2024-10-08 21:05:01.464174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.464221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.464419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.464447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.464605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.464634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.464895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.464949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.465133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.465181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.465432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.465477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.465609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.465637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.465823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.465874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.466039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.466088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.466248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.466300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 
00:37:32.759 [2024-10-08 21:05:01.466444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.466482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.466681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.759 [2024-10-08 21:05:01.466711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.759 qpair failed and we were unable to recover it. 00:37:32.759 [2024-10-08 21:05:01.466883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.466933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.467078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.467126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.467319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.467347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.467510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.467539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.467669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.467703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.467829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.467882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.468030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.468077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.468253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.468302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 
00:37:32.760 [2024-10-08 21:05:01.468460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.468489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.468644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.468680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.468882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.468941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.469188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.469236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.469498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.469552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.469819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.469881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.470121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.470172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.470420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.470471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.470647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.470684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.470847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.470876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 
00:37:32.760 [2024-10-08 21:05:01.471075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.471126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.471335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.471384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.471520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.471548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.471794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.471843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.471955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.472010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.472191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.472243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.472505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.472562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.472808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.472858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.473035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.473086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.473258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.473307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 
00:37:32.760 [2024-10-08 21:05:01.473494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.473522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.473707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.473769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.473956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.474007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.474257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.474307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.474444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.474478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.474676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.474718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.474979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.475022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.475283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.475334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.475515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.475545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.475749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.475800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 
00:37:32.760 [2024-10-08 21:05:01.476002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.476054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.476229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.476281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.760 qpair failed and we were unable to recover it. 00:37:32.760 [2024-10-08 21:05:01.476382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.760 [2024-10-08 21:05:01.476415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.476663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.476694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.476849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.476905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.477154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.477212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.477426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.477477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.477719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.477750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.477987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.478037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.478240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.478291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 
00:37:32.761 [2024-10-08 21:05:01.478492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.478543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.478776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.478809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.478966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.479030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:32.761 [2024-10-08 21:05:01.479322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.761 [2024-10-08 21:05:01.479366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:32.761 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.479568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.479597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.479771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.479819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.480004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.480057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.480260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.480308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.480481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.480510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.480696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.480755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-10-08 21:05:01.480942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.480989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.481189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.481237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.481379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.481409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.481544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.481575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.481755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.481805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.482046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.482094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.482228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.482276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.482420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.482451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.482645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.482685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.482941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.482992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-10-08 21:05:01.483175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.483225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.483425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.483476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.483668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.483704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-10-08 21:05:01.483846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-10-08 21:05:01.483901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.484096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.484145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.484330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.484382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.484576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.484605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.484859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.484910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.485085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.485133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.485352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.485404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-10-08 21:05:01.485641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.485686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.485926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.485955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.486099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.486145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.486383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.486430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.486667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.486696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.486858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.486887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.487065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.487094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.487263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.487314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.487491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.487542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.487728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.487758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-10-08 21:05:01.487925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.487953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.488088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.488117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.488288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.488317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.488455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.488488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.488729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.488759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.489001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.489031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.489179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.489208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.489351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.489401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.489540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.489569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.489773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.489802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-10-08 21:05:01.489997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.490045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.490218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.490267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.490501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.490530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.490719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.490775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.490960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.491011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.491133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.491189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.491337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.491388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.491618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.491647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.491904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.491952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.492210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.492263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-10-08 21:05:01.492504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.492553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.492796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.492846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.493052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.493102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-10-08 21:05:01.493285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-10-08 21:05:01.493332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.493508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.493536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.493678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.493707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.493903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.493932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.494055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.494109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.494300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.494352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.494511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.494540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-10-08 21:05:01.494734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.494787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.494925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.494954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.495133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.495162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.495332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.495360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.495581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.495609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.495808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.495858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.496009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.496058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.496266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.496317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.496460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.496488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.496623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.496660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-10-08 21:05:01.496794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.496849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.497042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.497090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.497292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.497343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.497536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.497569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.497814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.497865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.498048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.498097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.498279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.498330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.498523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.498552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.498807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.498859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.499009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.499058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-10-08 21:05:01.499266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.499318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.499491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.499520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.499673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.499703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.499886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.499937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.500085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.500135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.500322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.500368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.500535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.500563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.500774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.500825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.501031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.501080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.501256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.501306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-10-08 21:05:01.501423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.501452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.501570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.501599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.501786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.501838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-10-08 21:05:01.502095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-10-08 21:05:01.502147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.502299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.502349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.502552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.502581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.502675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.502705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.502887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.502941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.503147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.503199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.503420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.503449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 
00:37:33.039 [2024-10-08 21:05:01.503642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.503679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.503876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.503930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.504094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.504145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.504322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.504373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.504534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.504567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.504751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.504803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.505056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.505117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.505315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.505367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.505567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.505596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.505824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.505872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 
00:37:33.039 [2024-10-08 21:05:01.506016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.506063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.506315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.506365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.506453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.506481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.506610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.506643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.506874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.506935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.507134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.507184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.507325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.507376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.507474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.507512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.507666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.507695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.507890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.507940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 
00:37:33.039 [2024-10-08 21:05:01.508107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.508157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.508338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.508385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.508612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.508641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.508840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.508891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.509137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.509189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.509348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.509399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.509545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.509579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.509827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.509875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.510025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.510076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.510257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.510305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 
00:37:33.039 [2024-10-08 21:05:01.510443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.510472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.510605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.510634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.510792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.510842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-10-08 21:05:01.510984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-10-08 21:05:01.511037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.511177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.511230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.511402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.511430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.511565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.511593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.511799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.511828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.511962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.511991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.512147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.512175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-10-08 21:05:01.512301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.512329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.512487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.512525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.512673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.512703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.512893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.512952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.513146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.513196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.513392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.513420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.513545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.513573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.513728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.513783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.513966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.514014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.514162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.514211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-10-08 21:05:01.514430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.514458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.514620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.514656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.514879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.514944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.515136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.515191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.515435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.515485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.515713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.515744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.515897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.515946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.516105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.516154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.516274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.516325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.516464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.516493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-10-08 21:05:01.516635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.516724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.516915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.516944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.517133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.517162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.517346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.517374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.517549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.517578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.517743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.517794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.518029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.518077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-10-08 21:05:01.518303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-10-08 21:05:01.518355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.518557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.518586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.518840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.518889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-10-08 21:05:01.519088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.519140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.519319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.519371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.519617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.519646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.519896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.519957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.520101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.520149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.520392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.520443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.520620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.520648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.520848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.520876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.521096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.521148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.521356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.521406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-10-08 21:05:01.521567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.521596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.521764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.521794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.521990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.522040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.522241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.522293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.522448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.522477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.522614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.522643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.522794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.522843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.523055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.523105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.523297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.523348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.523503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.523532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-10-08 21:05:01.523731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.523761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.523916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.523945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.524112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.524141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.524311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.524348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.524537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.524572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.524803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.524835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.525083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.525140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.525387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.525441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.525715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.525770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.526012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.526064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-10-08 21:05:01.526261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.526314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.526489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.526517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.526665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.526695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.526958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.527013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.527209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.527259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.527459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.527508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.527706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.527771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-10-08 21:05:01.528024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-10-08 21:05:01.528085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.528261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.528289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.528468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.528496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-10-08 21:05:01.528734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.528764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.528942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.529003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.529261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.529310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.529410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.529437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.529565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.529594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.529850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.529902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.530047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.530104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.530259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.530309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.530547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.530576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.530772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.530824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-10-08 21:05:01.531014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.531064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.531282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.531335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.531447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.531476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.531637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.531673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.531875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.531937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.532095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.532145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.532345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.532395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.532504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.532533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.532718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.532777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.533023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.533081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-10-08 21:05:01.533203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.533254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.533423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.533452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.533618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.533646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.533799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.533856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.533996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.534164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.534368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.534578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.534778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.534945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.534974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-10-08 21:05:01.535112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.535140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.535317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.535346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.535569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.535598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.535845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.535898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.536046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.536100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.536335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.536393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.536623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.536660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-10-08 21:05:01.536820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-10-08 21:05:01.536871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.537036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.537088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.537325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.537376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-10-08 21:05:01.537573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.537602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.537858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.537911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.538097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.538149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.538371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.538423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.538660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.538690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.538920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.538949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.539187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.539239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.539457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.539502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.539674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.539735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.539918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.539970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-10-08 21:05:01.540238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.540300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.540516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.540568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.540700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.540730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.540879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.540931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.541157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.541209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.541360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.541411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.541589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.541618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.541861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.541920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.542140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.542193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.542346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.542397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-10-08 21:05:01.542565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.542594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.542750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.542804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.542930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.542989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.543181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.543214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.543408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.543458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.543687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.543717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.543901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.543953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.544137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.544189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.544345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.544397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.544488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.544516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-10-08 21:05:01.544715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.544777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.545000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.545053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.545269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.545318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.545548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.545577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.545713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.545775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.545937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.545989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.546173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.546229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-10-08 21:05:01.546446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-10-08 21:05:01.546476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.546579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.546607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.546865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.546919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-10-08 21:05:01.547119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.547172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.547434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.547487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.547632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.547668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.547844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.547898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.548128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.548180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.548322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.548378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.548606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.548635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.548847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.548898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.549071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.549123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.549349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.549398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-10-08 21:05:01.549585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.549614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.549816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.549846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.550014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.550066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.550289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.550342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.550527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.550556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.550677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.550707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.550842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.550904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.551110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.551160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.551315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.551366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.551524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.551553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-10-08 21:05:01.551738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.551791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.552043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.552314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.552479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.552618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.552793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.552998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.553027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.553235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.553285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.553435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.553474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.553635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.553670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-10-08 21:05:01.553821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.553877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.554041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.554070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.554196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.554225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.554395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.554423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.554551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.554580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.554761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.554814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.555052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.555114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.555327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-10-08 21:05:01.555380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-10-08 21:05:01.555487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.555515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.555670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.555700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 
00:37:33.045 [2024-10-08 21:05:01.555856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.555919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.556049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.556120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.556313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.556342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.556579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.556608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.556873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.556939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.557195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.557250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.557502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.557552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.557758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.557810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.558033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.558084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.558332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.558381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 
00:37:33.045 [2024-10-08 21:05:01.558615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.558644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.558816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.558869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.558998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.559052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.559269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.559322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.559454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.559483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.559681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.559710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.559967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.560019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.560162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.560224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.560443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.560494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.560683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.560741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 
00:37:33.045 [2024-10-08 21:05:01.560966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.561018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.561260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.561312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.561460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.561489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.561684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.561719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.561976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.562034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.562153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.562210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.562408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.562455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.562612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.562640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.562848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.562900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.563103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.563154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 
00:37:33.045 [2024-10-08 21:05:01.563307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.563354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.563518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.563547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.563678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.045 [2024-10-08 21:05:01.563708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.045 qpair failed and we were unable to recover it. 00:37:33.045 [2024-10-08 21:05:01.563878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.563933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.564158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.564218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.564419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.564469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.564671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.564701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.564821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.564881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.565052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.565097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.565299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.565350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 
00:37:33.046 [2024-10-08 21:05:01.565486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.565525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.565782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.565835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.566089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.566147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.566359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.566412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.566612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.566640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.566819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.566865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.567083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.567136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.567283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.567338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.567474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.567503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.567708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.567761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 
00:37:33.046 [2024-10-08 21:05:01.568003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.568051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.568263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.568315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.568495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.568524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.568662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.568691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.568874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.568936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.569200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.569299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.569636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.569722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.569931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.569997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.570326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.570395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.570702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.570733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 
00:37:33.046 [2024-10-08 21:05:01.570992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.571061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.571309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.571378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.571595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.571694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.571863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.571904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.572211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.572277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.572568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.572671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.572880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.572948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.573216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.573298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.573584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.573669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.573935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.574002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 
00:37:33.046 [2024-10-08 21:05:01.574291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.574356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.574610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.574705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.574881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.046 [2024-10-08 21:05:01.574910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.046 qpair failed and we were unable to recover it. 00:37:33.046 [2024-10-08 21:05:01.575149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.575179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.575333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.575399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.575709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.575741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.575921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.575951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.576294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.576363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.576646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.576723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.576860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.576892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-10-08 21:05:01.577058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.577124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.577395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.577460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.577739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.577769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.577929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.578002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.578300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.578368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.578672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.578721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.578875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.578939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.579181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.579247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.579554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.579623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.579937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.580006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-10-08 21:05:01.580297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.580364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.580672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.580733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.580858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.580887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.581146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.581225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.581525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.581590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.581869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.581938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.582243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.582309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.582601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.582720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.582928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.582998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.583255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.583323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-10-08 21:05:01.583592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.583620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.583799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.583882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.584171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.584238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.584484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.584519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.584697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.584765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.585015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.585094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.585351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.585381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.585559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.585629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.585962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.586029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.586309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.586345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-10-08 21:05:01.586558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.586624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.586942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.587013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.587301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.587330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-10-08 21:05:01.587472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-10-08 21:05:01.587549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.587859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.587927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.588217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.588247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.588512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.588577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.588869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.588951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.589270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.589299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.589593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.589681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-10-08 21:05:01.589970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.590036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.590313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.590353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.590608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.590702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.590910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.590978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.591241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.591272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.591560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.591627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.591938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.592008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.592285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.592323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.592545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.592612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.592938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.593005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-10-08 21:05:01.593285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.593316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.593464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.593529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.593782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.593849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.594145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.594174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.594322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.594388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.594707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.594776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.595050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.595079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.595252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.595319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.595569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.595634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.595899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.595929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-10-08 21:05:01.596118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.596182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.596485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.596553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.596842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.596872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.597091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.597159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.597454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.597519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.597779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.597810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.597989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.598054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.598335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.598415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.598659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.598689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.598872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.598940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-10-08 21:05:01.599244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.599310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.599553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.599589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.599749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.599817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-10-08 21:05:01.600105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-10-08 21:05:01.600173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.600459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.600490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.600684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.600752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.601058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.601125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.601414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.601445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.601648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.601737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.602025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.602090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-10-08 21:05:01.602365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.602395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.602553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.602619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.602963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.603032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.603267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.603296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.603436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.603504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.603751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.603820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.604104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.604134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.604291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.604357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.604611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.604700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.605007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.605036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-10-08 21:05:01.605235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.605326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.605590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.605671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.605867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.605904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.606092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.606156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.606426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.606491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.606776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.606807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.607006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.607070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.607351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.607418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.607668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.607700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.607867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.607935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-10-08 21:05:01.608217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.608283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.608550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.608580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.608776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.608843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.609118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.609200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.609544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.609576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.609902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.609968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.610238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.610304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.610546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.610577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.610777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.610845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-10-08 21:05:01.611131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-10-08 21:05:01.611199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-10-08 21:05:01.611472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.611500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.611697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.611768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.612051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.612116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.612381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.612411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.612585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.612667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.612987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.613068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.613348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.613377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.613503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.613577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.613868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.613898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.614030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.614060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-10-08 21:05:01.614248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.614313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.614593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.614683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.615011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.615041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.615354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.615421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.615711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.615778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.616029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.616059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.616242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.616306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.616529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.616607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.616905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.616935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.617115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.617183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-10-08 21:05:01.617462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.617537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.617827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.617858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.618043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.618109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.618386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.618451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.618765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.618796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.619018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.619085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.619395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.619461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.619709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.619748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.619938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.620002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.620249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.620322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-10-08 21:05:01.620655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.620703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.620991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.621074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.621369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.621434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.621732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.621768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.621957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.622024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.622281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.622349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.622684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.622748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.622971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.623039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.623352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.623419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-10-08 21:05:01.623738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-10-08 21:05:01.623772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-10-08 21:05:01.623972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.624037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.624296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.624366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.624663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.624694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.624867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.624932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.625193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.625261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.625500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.625529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.625680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.625749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.626045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.626101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.626369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.626399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.626598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.626701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-10-08 21:05:01.627001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.627070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.627330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.627358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.627539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.627607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.627875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.627904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.628140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.628171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.628359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.628423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.628710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.628779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.629060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.629089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.629301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.629370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.629672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.629739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-10-08 21:05:01.630015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.630051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.630196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.630262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.630531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.630600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.630924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.630953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.631118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.631194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.631447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.631512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.631727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.631763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.631957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.632025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.632320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.632385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.632621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.632661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-10-08 21:05:01.632829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.632896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.633134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.633215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.633493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.633522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.633698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.633767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.634039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.634104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.634368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.634399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.634590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.634673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.635007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.635087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.635370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.635401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-10-08 21:05:01.635550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-10-08 21:05:01.635618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-10-08 21:05:01.635882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.635942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.636151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.636181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.636348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.636414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.636637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.636741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.637041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.637071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.637296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.637371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.637677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.637741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.638010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.638041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.638237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.638303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.638576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.638669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-10-08 21:05:01.638960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.638989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.639165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.639246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.639612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.639700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.639987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.640017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.640188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.640253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.640527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.640592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.640856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.640885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.641011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.641076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.641333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.641398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.641663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.641693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-10-08 21:05:01.641874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.641951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.642251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.642315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.642538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.642574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.642718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.642786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.643069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.643134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.643434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.643463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.643736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.643766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.643901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.643972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.644238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.644267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.644420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.644484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-10-08 21:05:01.644745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.644812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.644993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.645021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.645170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.645249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.645516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.645580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.645884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.645914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.646049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.646113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.646392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.646457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.646727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.646757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.646936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.647001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-10-08 21:05:01.647293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-10-08 21:05:01.647358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-10-08 21:05:01.647667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.647697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.647914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.647978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.648256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.648321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.648607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.648636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.648846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.648913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.649163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.649228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.649490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.649519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.649681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.649748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.650030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.650094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.650352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.650380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-10-08 21:05:01.650533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.650598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.650972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.651074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.651377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.651408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.651559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.651624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.651959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.652026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.652350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.652379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.652685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.652752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.652997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.653063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.653369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.653398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.653683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.653748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-10-08 21:05:01.654018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.654098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.654365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.654394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.654533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.654598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.654886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.654952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.655147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.655176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.655374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.655447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.655702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.655769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.656019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.656049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.656247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.656312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.656546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.656611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-10-08 21:05:01.656936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.656965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.657194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.657259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.657550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.657615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.657922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.657952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-10-08 21:05:01.658146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-10-08 21:05:01.658212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.658461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.658527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.658820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.658850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.659054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.659119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.659357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.659422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.659685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.659735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-10-08 21:05:01.659908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.659974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.660223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.660287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.660519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.660583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.660783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.660816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.660996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.661071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.661368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.661397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.661578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.661669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.661922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.661988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.662262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.662290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.662454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.662519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-10-08 21:05:01.662774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.662842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.663132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.663160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.663356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.663420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.663694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.663765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.664065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.664095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.664276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.664340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.664566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.664630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.664882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.664911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.665025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.665104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.665319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.665383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-10-08 21:05:01.665610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.665645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.665777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.665842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.666134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.666198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.666501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.666529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.666788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.666854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.667131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.667196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.667449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.667478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.667684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.667750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.668050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.668114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.668322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.668351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-10-08 21:05:01.668599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.668682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.668834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.668863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.668971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.669012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.669109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.669163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.669481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-10-08 21:05:01.669547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-10-08 21:05:01.669805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.669835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.669979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.670052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.670310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.670374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.670680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.670715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.670956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.671021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-10-08 21:05:01.671288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.671354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.671632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.671668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.671929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.671993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.672231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.672297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.672550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.672578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.672764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.672830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.673140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.673205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.673533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.673563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.673879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.673946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.674245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.674311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-10-08 21:05:01.674621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.674659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.674923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.674989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.675302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.675368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.675675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.675705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.675909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.675974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.676255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.676320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.676533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.676598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.676816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.676845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.676978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.677041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.677273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.677302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-10-08 21:05:01.677446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.677528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.677718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.677784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.678003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.678032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.678139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.678199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.678435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.678499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.678734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.678763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.679015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.679080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.679355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.679419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.679693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.679723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.679886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.679951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-10-08 21:05:01.680246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.680310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.680583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.680612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.680764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.680830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.681092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.681156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.681398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.681427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-10-08 21:05:01.681628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-10-08 21:05:01.681718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.681940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.682005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.682247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.682276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.682425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.682490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.682715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.682781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-10-08 21:05:01.682970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.682999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.683114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.683178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.683455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.683519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.683744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.683774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.683865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.683926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.684196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.684261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.684495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.684560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.684759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.684788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.684967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.685038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.685235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.685265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-10-08 21:05:01.685440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.685505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.685741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.685808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.686102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.686131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.686395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.686459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.686733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.686799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.687052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.687081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.687235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.687299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.687585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.687666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.687862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.687892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.688030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.688102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-10-08 21:05:01.688348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.688423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.688648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.688685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.688810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.688876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.689079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.689145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.689356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.689385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.689520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.689585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.689812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.689878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.690112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.690141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.690364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.690429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.690727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.690795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-10-08 21:05:01.691012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.691041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.691172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.691237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.691523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.691588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.691778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.691808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.691945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.692000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-10-08 21:05:01.692175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-10-08 21:05:01.692239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.692452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.692516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.692743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.692773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.692924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.692988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.693273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.693302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-10-08 21:05:01.693511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.693576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.693816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.693881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.694157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.694186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.694298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.694363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.694566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.694630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.694885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.694914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.695090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.695155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.695394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.695459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.695670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.695699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.695809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.695878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-10-08 21:05:01.696075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.696140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.696341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.696371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.696500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.696580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.696779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.696844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.697054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.697083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.697238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.697304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.697543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.697607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.697900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.697929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.698102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.698167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.698369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.698434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-10-08 21:05:01.698643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.698690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.698817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.698887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.699120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.699184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.699382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.699411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.699543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.699572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.699716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.699782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.699992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.700021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.700131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.700195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.700398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.700462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.700677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.700744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-10-08 21:05:01.700946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.700974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.701127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.701192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.701400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.701464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.701635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.701712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.701927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.701956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.702056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-10-08 21:05:01.702106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-10-08 21:05:01.702265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.702330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.702568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.702632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.702857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.702885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.703028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.703092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-10-08 21:05:01.703295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.703360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.703555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.703621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.703832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.703861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.704042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.704107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.704344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.704409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.704613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.704695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.704882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.704910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.705122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.705225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.705455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.705523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.705754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.705823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-10-08 21:05:01.706059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.706088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.706215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.706278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.706477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.706541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.706617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d35f0 (9): Bad file descriptor 00:37:33.058 [2024-10-08 21:05:01.706863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.706907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.707051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.707183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.707332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.707541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.707703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-10-08 21:05:01.707829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.707860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.708876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.708905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.709081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.709144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.709372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.709435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.709686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.709739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-10-08 21:05:01.709841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.709869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.710107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.710170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-10-08 21:05:01.710473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-10-08 21:05:01.710536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.710762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.710791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.710934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.710998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.711250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.711313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.711539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.711601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.711776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.711804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.711914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.711942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.712095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.712158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-10-08 21:05:01.712370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.712433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.712615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.712647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.712761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.712789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.713002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.713065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.713331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.713393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.713589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.713669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.713829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.713857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.713961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.713988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.714179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.714253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.714469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.714533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-10-08 21:05:01.714746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.714774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.714930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.714993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.715205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.715269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.715549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.715613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.715813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.715841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.716008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.716070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.716347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.716411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.716691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.716746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.716913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.716977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.717261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.717290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-10-08 21:05:01.717454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.717517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.717765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.717794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.717917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.717945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.718104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.718166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.718338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.718401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.718628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.718662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.718775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.718802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.718971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.719034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.719287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.719351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.719562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.719625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-10-08 21:05:01.719850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.719879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.720261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.720325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.720558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-10-08 21:05:01.720621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-10-08 21:05:01.720794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.720823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.720972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.721000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.721147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.721221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.721509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.721572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.721747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.721776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.721990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.722054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.722315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.722377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-10-08 21:05:01.722604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.722681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.722819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.722847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.722998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.723062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.723311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.723379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.723579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.723642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.723807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.723835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.723961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.723989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.724083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.724154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.724458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.724522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.724744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.724773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-10-08 21:05:01.724878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.724936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.725174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.725237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.725502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.725564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.725792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.725821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.725959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.726023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.726254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.726282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.726468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.726531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.726729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.726794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.727043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.727071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.727211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.727274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-10-08 21:05:01.727522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.727585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.727788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.727817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.727989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.728063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.728311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.728374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.728584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.728612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.728725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.728804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.729005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.729069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.729351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.729383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.729585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.729649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.729850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.729913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-10-08 21:05:01.730214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.730242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.730414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.730477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.730758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.730824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.731042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-10-08 21:05:01.731070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-10-08 21:05:01.731245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.731317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-10-08 21:05:01.731574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.731638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-10-08 21:05:01.731844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.731872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-10-08 21:05:01.732070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.732133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-10-08 21:05:01.732382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.732446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-10-08 21:05:01.732637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-10-08 21:05:01.732681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 
00:37:33.061 [2024-10-08 21:05:01.732790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.061 [2024-10-08 21:05:01.732818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420
00:37:33.061 qpair failed and we were unable to recover it.
00:37:33.061 [... the same three-line failure repeats with only the timestamps changing: posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x19c5630 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it."; roughly 210 occurrences fall in this span between 21:05:01.732 and 21:05:01.781 ...]
00:37:33.349 [2024-10-08 21:05:01.781632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.349 [2024-10-08 21:05:01.781670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420
00:37:33.349 qpair failed and we were unable to recover it.
00:37:33.349 [2024-10-08 21:05:01.781793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.781827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.781938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.781973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.782111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.782138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.782318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.782365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.782559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.782606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.782771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.782810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.782992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.783029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.783186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.783223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.783409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.783447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-10-08 21:05:01.783598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-10-08 21:05:01.783634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-10-08 21:05:01.783774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.783813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.783985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.784178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.784353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.784550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.784724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.784860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.784896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-10-08 21:05:01.785472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.785903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.785932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.786792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.786821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-10-08 21:05:01.786986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.787932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.787961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.788128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.788158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.788314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.788343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.788499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.788528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-10-08 21:05:01.788658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.788688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.788786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.788815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.788958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.789942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.789970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-10-08 21:05:01.790068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-10-08 21:05:01.790096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-10-08 21:05:01.790220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.790384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.790510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.790638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.790809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.790961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.790989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.791116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.791305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.791430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.791582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-10-08 21:05:01.791716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.791874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.791902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.792836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.792993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.793133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-10-08 21:05:01.793284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.793414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.793597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.793754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.793921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.793954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.794122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.794152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.794315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.794345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.794480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.794510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.794627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.794674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.794820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.794885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-10-08 21:05:01.795091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.795159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.795365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.795431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.795631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.795668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.795767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.795801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.795948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.795994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.796170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.796229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.796340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.796404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.796509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.796538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.796641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-10-08 21:05:01.796677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-10-08 21:05:01.796802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.796862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 
00:37:33.352 [2024-10-08 21:05:01.796988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.797896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.797924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.798058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.798206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.798407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 
00:37:33.352 [2024-10-08 21:05:01.798570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.798747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.798883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.798912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.799918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.799947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 
00:37:33.352 [2024-10-08 21:05:01.800233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.800922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.800950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.801099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.801274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.801428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.801630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 
00:37:33.352 [2024-10-08 21:05:01.801779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.801939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.801968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.802946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.802975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.803066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.803095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.352 qpair failed and we were unable to recover it. 00:37:33.352 [2024-10-08 21:05:01.803263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.352 [2024-10-08 21:05:01.803292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 
00:37:33.353 [2024-10-08 21:05:01.803394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.803422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.803643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.803679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.803813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.803865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.803997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.804213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.804401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.804532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.804754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.804896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.804925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.805089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 
00:37:33.353 [2024-10-08 21:05:01.805220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.805356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.805521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.805711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.805831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.805860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.806002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.806131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.806327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.806500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.806679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 
00:37:33.353 [2024-10-08 21:05:01.806839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.806892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.807826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.807855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.808016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.808064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.808244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.808275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.808455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.808486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 
00:37:33.353 [2024-10-08 21:05:01.808631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.808670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.808783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.808822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.809009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.809052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.809207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.809265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.809454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.809501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.809641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.809688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.809813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.809867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.810028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.353 [2024-10-08 21:05:01.810091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.353 qpair failed and we were unable to recover it. 00:37:33.353 [2024-10-08 21:05:01.810275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.810303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.810513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.810541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 
00:37:33.354 [2024-10-08 21:05:01.810662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.810703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.810815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.810844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.810979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.811877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.811906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.812029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 
00:37:33.354 [2024-10-08 21:05:01.812201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.812359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.812515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.812739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.812878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.812906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 
00:37:33.354 [2024-10-08 21:05:01.813826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.813855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.813996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.814155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.814342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.814508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.814709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.814869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.814897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.815063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.815218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.815387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 
00:37:33.354 [2024-10-08 21:05:01.815549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.815713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.815853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.354 [2024-10-08 21:05:01.815881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.354 qpair failed and we were unable to recover it. 00:37:33.354 [2024-10-08 21:05:01.816036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.816072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.816202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.816231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.816349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.816393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.816568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.816598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.816751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.816783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.816969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.817033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.817261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.817333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 
00:37:33.355 [2024-10-08 21:05:01.817589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.817674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.817881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.817937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.818145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.818350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.818514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.818675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.818815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.818976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.819130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.819306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 
00:37:33.355 [2024-10-08 21:05:01.819471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.819634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.819765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.819935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.819963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.820930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.820959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 
00:37:33.355 [2024-10-08 21:05:01.821141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.821170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.821302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.821336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.821564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.821592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.821771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.821831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.822001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.822052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.822263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.822292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.822463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.822491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.822626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.822661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.822790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.822842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.823037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.823089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 
00:37:33.355 [2024-10-08 21:05:01.823302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.823351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.823553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.823581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.355 qpair failed and we were unable to recover it. 00:37:33.355 [2024-10-08 21:05:01.823735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.355 [2024-10-08 21:05:01.823789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.823919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.823984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.824146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.824199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.824332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.824360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.824490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.824519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.824739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.824790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.824909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.824970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.825092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.825128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 
00:37:33.356 [2024-10-08 21:05:01.825264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.825293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.825423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.825452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.825669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.825698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.825837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.825865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.825988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.826193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.826349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.826562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.826725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.826908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.826959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 
00:37:33.356 [2024-10-08 21:05:01.827152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.827203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.827375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.827404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.827528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.827557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.827721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.827779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.827939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.827989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.828172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.828224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.828470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.828500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.828735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.828795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.828935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.828988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.829153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.829205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 
00:37:33.356 [2024-10-08 21:05:01.829422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.829451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.829630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.829678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.829839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.829898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.830098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.830151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.830339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.830390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.830635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.830696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.830898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.830941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.831141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.831189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.831335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.831382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.831506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.831535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9440000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 
00:37:33.356 [2024-10-08 21:05:01.831745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.831853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.832182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.356 [2024-10-08 21:05:01.832252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.356 qpair failed and we were unable to recover it. 00:37:33.356 [2024-10-08 21:05:01.832560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.832630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.832847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.832928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.833263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.833345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.833633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.833737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.833943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.833973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.834261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.834326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.834573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.834689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.834871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.834901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-10-08 21:05:01.835069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.835143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.835440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.835506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.835750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.835784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.835960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.836024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.836323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.836404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.836710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.836740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.836934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.837001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.837290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.837357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.837634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.837721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.837866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.837931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-10-08 21:05:01.838200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.838277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.838594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.838680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.838866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.838945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.839202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.839268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.839503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.839568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.839835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.839865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.840044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.840111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.840333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.840400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.840632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.840713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.840869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.840900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-10-08 21:05:01.841135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.841164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.841317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.841390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.841679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.841745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.841840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.841870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.842047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.842115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.842367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.842433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.842723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.842760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.842895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.842976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.843235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.843306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.843563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.843630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-10-08 21:05:01.843823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.843852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.844063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.844131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-10-08 21:05:01.844355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-10-08 21:05:01.844419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.844694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.844734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.844937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.845006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.845243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.845276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.845460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.845525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.845758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.845788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.845907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.845938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.846106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.846172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-10-08 21:05:01.846414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.846494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.846746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.846776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.846900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.846973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.847229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.847293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.847588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.847673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.847855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.847885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.848116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.848182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.848494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.848562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.848820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.848857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.849071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.849140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-10-08 21:05:01.849372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.849436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.849721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.849759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.849919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.849991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.850251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.850282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.850454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.850525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.850781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.850850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.851086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.851118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.851300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.851371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.851638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.851726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.851937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.851967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-10-08 21:05:01.852131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.852197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.852430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.852513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.852748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.852778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.852935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.853010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.853206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.853272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.853515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.853545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.853701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.853769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-10-08 21:05:01.853947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-10-08 21:05:01.854024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.854248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.854279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.854432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.854498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-10-08 21:05:01.854722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.854791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.854995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.855024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.855203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.855280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.855482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.855558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.855779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.855811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.855994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.856059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.856271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.856347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.856624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.856708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.856839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.856868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.857104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.857172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-10-08 21:05:01.857421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.857450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.857582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.857670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.857838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.857867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.858020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.858053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.858211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.858281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.858504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.858568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.858822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.858853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.859000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.859066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.859249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.859317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.859518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.859547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-10-08 21:05:01.859702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.859784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.860013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.860080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.860308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.860337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.860512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.860580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.860833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.860899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.861139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.861170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.861294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.861359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.861558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.861623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.861827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.861858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.861961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.861990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-10-08 21:05:01.862219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.862297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.862533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.862563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.862751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.862820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.863079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.863148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.863388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.863417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.863578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.863701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.863933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.863999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-10-08 21:05:01.864232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-10-08 21:05:01.864265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.864443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.864508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.864734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.864766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-10-08 21:05:01.864939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.864969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.865140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.865207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.865462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.865529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.865727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.865763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.865925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.865993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.866203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.866286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.866493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.866523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.866682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.866767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.866995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.867063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.867287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.867324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-10-08 21:05:01.867477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.867543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.867814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.867883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.868114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.868144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.868289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.868354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.868604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.868710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.868953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.868981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.869144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.869211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.869460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.869527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.869776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.869807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.869959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.870025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-10-08 21:05:01.870276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.870361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.870589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.870618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.870813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.870883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.871121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.871187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.871418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.871448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.871593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.871696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.871835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.871865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.872061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.872090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.872238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.872307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.872544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.872609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-10-08 21:05:01.872872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.872904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.873049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.873115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.873323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.873388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.873610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.873639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.873827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.873893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.874138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.874206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-10-08 21:05:01.874455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-10-08 21:05:01.874484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.874675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.874745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.874985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.875050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.875275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.875307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 
00:37:33.361 [2024-10-08 21:05:01.875475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.875540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.875760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.875828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.876032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.876062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.876213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.876289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.876509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.876584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.876838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.876868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.877007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.877072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.877332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.877400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.877644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.877683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.877822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.877890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 
00:37:33.361 [2024-10-08 21:05:01.878119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.878185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.878412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.878447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.878628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.878716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.878853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.878886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.879112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.879142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.879289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.879355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.879582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.879672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.879895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.879924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.880101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.880170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.880411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.880477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 
00:37:33.361 [2024-10-08 21:05:01.880723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.880759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.880909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.880975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.881214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.881283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.881509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.881538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.881641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.881719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.881929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.881994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.882200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.882229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.882328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.882374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.882608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.882692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.882844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.882873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 
00:37:33.361 [2024-10-08 21:05:01.883058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.883159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.883412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.883480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.883731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.883763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.883915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.883980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.884182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.884245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.884473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.884537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.361 [2024-10-08 21:05:01.884789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.361 [2024-10-08 21:05:01.884821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.361 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.884948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.885013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.885220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.885249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.885403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.885468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 
00:37:33.362 [2024-10-08 21:05:01.885692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.885759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.885992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.886020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.886147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.886212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.886446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.886510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.886754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.886784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.886930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.886995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.887231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.887296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.887521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.887550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.887714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.887817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.888063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.888132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 
00:37:33.362 [2024-10-08 21:05:01.888354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.888382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.888527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.888590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.888837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.888866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.889023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.889051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.889195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.889258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.889483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.889547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.889796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.889825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.890019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.890119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.890369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.890439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.890644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.890682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 
00:37:33.362 [2024-10-08 21:05:01.890783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.890852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.891057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.891122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.891331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.891360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9444000b90 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.891528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.891595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.891851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.891880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.892009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.892037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.892196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.892259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.892467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.892531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.892771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.892800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.892943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.893007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 
00:37:33.362 [2024-10-08 21:05:01.893234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.893310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.362 [2024-10-08 21:05:01.893538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.362 [2024-10-08 21:05:01.893566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.362 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.893743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.893809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.894036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.894100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.894326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.894354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.894499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.894562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.894808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.894837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.894928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.894956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.895083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.895132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.895362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.895426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 
00:37:33.363 [2024-10-08 21:05:01.895649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.895683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.895864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.895927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.896153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.896216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.896441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.896469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.896619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.896698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.896908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.896971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.897171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.897198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.897375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.897439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.897682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.897746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.897973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.898001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 
00:37:33.363 [2024-10-08 21:05:01.898170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.898233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.898458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.898520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.898751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.898780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.898935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.898999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.899236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.899299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.899528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.899556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.899732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.899797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.900025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.900089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.900307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.900335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.900487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.900551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 
00:37:33.363 [2024-10-08 21:05:01.900790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.900819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.900974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.901002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.901138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.901202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.901401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.901464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.901700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.901729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.901876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.901939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.902109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.902172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.902407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.902435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.902572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.902635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.902896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.902959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 
00:37:33.363 [2024-10-08 21:05:01.903163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.903191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.903333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.363 [2024-10-08 21:05:01.903407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.363 qpair failed and we were unable to recover it. 00:37:33.363 [2024-10-08 21:05:01.903607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.903689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.903924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.903952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.904122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.904184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.904386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.904448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.904665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.904694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.904831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.904895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.905101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.905164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.905367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.905395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 
00:37:33.364 [2024-10-08 21:05:01.905526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.905604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.905869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.905933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.906131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.906159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.906337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.906400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.906637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.906713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.906852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.906881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.907036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.907099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.907328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.907391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.907615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.907643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.907777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.907840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 
00:37:33.364 [2024-10-08 21:05:01.908067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.908131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.908357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.908385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.908531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.908594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.908840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.908904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.909140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.909168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.909340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.909403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.909638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.909720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.909968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.909996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.910138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.910212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.910444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.910509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 
00:37:33.364 [2024-10-08 21:05:01.910713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.910742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.910918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.910982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.911207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.911271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.911503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.911531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.911703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.911768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.911991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.912055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.912280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.912308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.912455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.912519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.912722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.912788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.912986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.913014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 
00:37:33.364 [2024-10-08 21:05:01.913109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.913167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.913372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.913436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.913683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.364 [2024-10-08 21:05:01.913712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.364 qpair failed and we were unable to recover it. 00:37:33.364 [2024-10-08 21:05:01.913844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.913872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.914103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.914167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.914369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.914398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.914573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.914635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.914822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.914850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.914970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.914999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.915089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.915117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-10-08 21:05:01.915362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.915425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.915635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.915671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.915841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.915905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.916135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.916198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.916380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.916408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.916535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.916599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.916849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.916913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.917142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.917170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.917344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.917406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.917604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.917685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-10-08 21:05:01.917922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.917950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.918124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.918188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.918425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.918488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.918693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.918722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.918895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.918958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.919185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.919249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.919451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.919478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.919627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.919703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.919908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.919972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.920212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.920240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-10-08 21:05:01.920384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.920447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.920672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.920737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.920978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.921006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.921180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.921243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.921470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.921534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.921767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.921795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.921970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.922033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.922236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.922300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.922552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.922614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.922784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.922812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-10-08 21:05:01.922955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.923019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.923208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.923236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.923408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.923472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-10-08 21:05:01.923727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-10-08 21:05:01.923793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.924005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.924034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.924184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.924247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.924475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.924539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.924798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.924827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.924945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.925009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.925231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.925294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-10-08 21:05:01.925522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.925550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.925704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.925769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.925997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.926060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.926284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.926312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.926475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.926539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.926761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.926826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.927062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.927091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.927207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.927271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.927476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.927539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.927751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.927780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-10-08 21:05:01.927953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.928017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.928217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.928280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.928511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.928574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.928806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.928835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.928984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.929046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.929247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.929275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.929449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.929513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.929730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.929759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.929878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.929906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.930074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.930138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-10-08 21:05:01.930374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.930437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.930642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.930678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.930840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.930903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.931130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.931193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.931418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.931446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.931620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.931703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.931944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.932007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.932244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.932272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.932449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.932512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.932744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.932808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-10-08 21:05:01.933055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.933083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.933223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.933286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.933488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.933550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.933799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-10-08 21:05:01.933833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-10-08 21:05:01.933982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.934045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.934280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.934342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.934570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.934598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.934762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.934828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.935059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.935122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.935323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.935351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-10-08 21:05:01.935524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.935587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.935846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.935911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.936134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.936162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.936305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.936368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.936598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.936677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.936859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.936887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.937039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.937102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.937343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.937407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.937609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.937638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.937823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.937886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-10-08 21:05:01.938124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.938187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.938412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.938440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.938589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.938672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.938913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.938976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.939201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.939229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.939391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.939454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.939671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.939735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.939975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.940003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.940140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.940203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.940413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.940476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-10-08 21:05:01.940691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.940725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.940875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.940939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.941109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.941172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.941397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.941425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.941597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.941671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.941898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.941962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.942158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.942186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.942311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.942383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.942582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.942646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-10-08 21:05:01.942893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-10-08 21:05:01.942921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-10-08 21:05:01.943086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.943150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.943350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.943413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.943639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.943724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.943888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.943957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.944204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.944267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.944482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.944544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.944782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.944811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.944985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.945048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.945274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.945302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.945450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.945513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-10-08 21:05:01.945730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.945759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.945926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.945955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.946094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.946157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.946382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.946446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.946677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.946706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.946853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.946917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.947153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.947217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.947447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.947479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.947627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.947710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.947922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.947985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-10-08 21:05:01.948210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.948238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.948333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.948407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.948633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.948710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.948956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.948985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.949126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.949189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.949424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.949487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.949656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.949685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.949827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.949898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.950125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.950188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.950415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.950443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-10-08 21:05:01.950595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.950672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.950921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.950985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.951184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.951212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.951383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.951447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.951685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.951749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.951985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.952013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.952151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.952215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.952438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.952501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.952706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.952735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-10-08 21:05:01.952907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-10-08 21:05:01.952971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-10-08 21:05:01.953168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.953231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.953470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.953533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.953732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.953761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.953886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.953958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.954162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.954190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.954359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.954423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.954640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.954722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.954960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.954987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.955151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.955214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.955442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.955506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 
00:37:33.369 [2024-10-08 21:05:01.955732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.955761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.955908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.955972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.956206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.956269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.956493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.956521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.956697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.956761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.956967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.957030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.957264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.957292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.957436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.957498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.957728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.957802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.958005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.958034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 
00:37:33.369 [2024-10-08 21:05:01.958212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.958275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.958500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.958563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.958791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.958819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.958945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.959022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.959261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.959323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.959547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.959575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.959731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.959796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.960021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.960084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.960313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.960341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.960495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.960558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 
00:37:33.369 [2024-10-08 21:05:01.960806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.960834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.960957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.960986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.961141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.961206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.961433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.961496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.961723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.961752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.961929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.961991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.962222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.962285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.962507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.962535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.962695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.962759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.369 [2024-10-08 21:05:01.962994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.963057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 
00:37:33.369 [2024-10-08 21:05:01.963306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.369 [2024-10-08 21:05:01.963334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.369 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.963435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.963486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.963717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.963782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.964011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.964039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.964186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.964248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.964476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.964548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.964796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.964824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.964970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.965032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.965264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.965326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.965554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.965581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 
00:37:33.370 [2024-10-08 21:05:01.965682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.965761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.965960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.966023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.966253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.966281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.966420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.966483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.966712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.966776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.966975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.967003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.967176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.967238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.967465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.967528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.967719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.967747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.967908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.967981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 
00:37:33.370 [2024-10-08 21:05:01.968207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.968267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.968490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.968553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.968822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.968851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.969044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.969108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.969334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.969362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.969540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.969603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.969843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.969871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.970025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.970053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.970183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.970210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 00:37:33.370 [2024-10-08 21:05:01.970443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.370 [2024-10-08 21:05:01.970506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.370 qpair failed and we were unable to recover it. 
00:37:33.375 [2024-10-08 21:05:02.019623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-10-08 21:05:02.019698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-10-08 21:05:02.019933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-10-08 21:05:02.019961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-10-08 21:05:02.020105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-10-08 21:05:02.020166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-10-08 21:05:02.020367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-10-08 21:05:02.020428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-10-08 21:05:02.020591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-10-08 21:05:02.020618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-10-08 21:05:02.020775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.020830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.021049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.021113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.021350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.021378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.021553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.021617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.021885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.021951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-10-08 21:05:02.022184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.022212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.022350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.022414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.022642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.022724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.022956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.022983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.023124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.023187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.023387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.023450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.023639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.023675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.023775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.023802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.024007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.024070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.024266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.024300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-10-08 21:05:02.024564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.024627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.024952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.025016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.025216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.025245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.025420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.025483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.025699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.025729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.025855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.025883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.026045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.026108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.026333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.026396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.026627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.026660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.026812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.026875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-10-08 21:05:02.027105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.027168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.027367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.027395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.027542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.027605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.027862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.027926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.028152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.028180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.028330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.028393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.028620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.028696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.028911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.028939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.029039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.029089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.029318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.029379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-10-08 21:05:02.029580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.029607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.029714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.029763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.029992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.030055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.030255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.030283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.030429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.030491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-10-08 21:05:02.030732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-10-08 21:05:02.030797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.031032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.031059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.031239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.031303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.031509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.031571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.031779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.031808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-10-08 21:05:02.031984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.032047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.032271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.032334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.032566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.032629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.032833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.032861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.033057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.033120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.033343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.033371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.033518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.033580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.033813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.033842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.033996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.034023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.034155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.034215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-10-08 21:05:02.034445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.034508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.034737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.034766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.034932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.034995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.035221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.035284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.035511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.035539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.035703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.035768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.035971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.036033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.036265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.036294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.036457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.036518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.036745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.036810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-10-08 21:05:02.037041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.037069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.037209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.037272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.037472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.037533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.037722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.037750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.037886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.037946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.038154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.038215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.038431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.038459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.038593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.038671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.038892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.038956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.039155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.039184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-10-08 21:05:02.039360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.039423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.039619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.039697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.039942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.039970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.040104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.040166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.040394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.040457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.040618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-10-08 21:05:02.040645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-10-08 21:05:02.040782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.040811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.040963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.041035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.041267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.041294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.041435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.041496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-10-08 21:05:02.041702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.041730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.041888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.041917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.042059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.042121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.042348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.042411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.042642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.042680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.042819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.042883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.043106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.043169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.043398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.043426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.043574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.043638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.043886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.043949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-10-08 21:05:02.044152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.044180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.044364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.044428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.044627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.044728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.044960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.044988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.045141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.045204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.045407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.045471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.045702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.045731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.045858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.045921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.046153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.046217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.046405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.046434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-10-08 21:05:02.046559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.046638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.046881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.046944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.047178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.047207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.047356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.047419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.047622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.047711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.047947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.047975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.048094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.048158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.048383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.048446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.048666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.048695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.048825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.048852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-10-08 21:05:02.049000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.049061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.049298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.049327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.049500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.049564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.049782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.049811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.049965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.049993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.050145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.050206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-10-08 21:05:02.050407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-10-08 21:05:02.050468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.050699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.050727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.050880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.050941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.051148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.051209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 [2024-10-08 21:05:02.051438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.051465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.051638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.051716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.051916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.051979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.052172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.052200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.052372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.052433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.052636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.052719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.052838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.052866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.053025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.053053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.053266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.053330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-10-08 21:05:02.053556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-10-08 21:05:02.053584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 - 00:37:33.664 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 21:05:02.053734 through 21:05:02.099060 - first for tqpair=0x19c5630 and, in the final attempts, for tqpair=0x7f9440000b90 and tqpair=0x7f944c000b90, all with addr=10.0.0.2, port=4420 ...]
00:37:33.664 [2024-10-08 21:05:02.099165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.099194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.099289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.099318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.099479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.099543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.099795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.099825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.099955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.099985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.100106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.100290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.100454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.100613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.100780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 
00:37:33.664 [2024-10-08 21:05:02.100934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.100964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.101962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.101991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.664 [2024-10-08 21:05:02.102119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.664 [2024-10-08 21:05:02.102153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.664 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.102281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.102311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.102438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.102468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 
00:37:33.665 [2024-10-08 21:05:02.102633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.102717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.102812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.102841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.102996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.103900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.103930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.104059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.104089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 
00:37:33.665 [2024-10-08 21:05:02.104232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.104297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.104508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.104574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.104805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.104835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.104929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.105001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.105208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.105273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.105518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.105583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.105824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.105854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.106020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.106084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.106288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.106354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.106556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.106622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 
00:37:33.665 [2024-10-08 21:05:02.106820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.106850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.107005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.107034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.107201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.107265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.107468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.107533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.107754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.107785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.107951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.108016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.108216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.108281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.108540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.108605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.108801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.108831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.108956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.109020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 
00:37:33.665 [2024-10-08 21:05:02.109243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.109273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.109420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.109486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.109714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.109744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.109899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.109929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.110070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.110135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.110371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.110436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.110669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.110726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-10-08 21:05:02.110853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-10-08 21:05:02.110913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.111149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.111215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.111444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.111510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-10-08 21:05:02.111741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.111771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.111920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.111985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.112159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.112188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.112291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.112321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.112460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.112524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.112754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.112784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.112930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.112996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.113228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.113293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.113515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.113580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.113797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.113827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-10-08 21:05:02.113974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.114039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.114308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.114373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.114580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.114646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.114836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.114866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.115013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.115043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.115183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.115247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.115480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.115546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.115801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.115831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.115974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.116039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.116275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.116340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-10-08 21:05:02.116577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.116607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.116753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.116819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.117030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.117095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.117324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.117354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.117536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.117601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.117853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.117919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.118120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.118150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.118317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.118383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.118610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.118689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.118922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.118952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-10-08 21:05:02.119076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.119135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.119367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.119432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.119664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.119693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.119848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.119915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.120144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.120209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.120405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.120434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.120616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.120698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-10-08 21:05:02.120932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-10-08 21:05:02.121009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.121244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.121273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.121441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.121506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-10-08 21:05:02.121739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.121769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.121928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.121957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.122098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.122162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.122398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.122464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.122696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.122727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.122855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.122919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.123127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.123191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.123399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.123429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.123606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.123686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.123931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.123997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-10-08 21:05:02.124201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.124231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.124341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.124408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.124614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.124696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.124932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.124962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.125108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.125173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.125413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.125478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.125701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.125731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.125908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.125973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.126203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.126269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.126508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.126537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-10-08 21:05:02.126672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.126738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.126971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.127037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.127246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.127275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.127426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.127491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.127734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.127801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.127971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.128000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.128105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.128134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.128333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.128398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.128677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.128728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-10-08 21:05:02.128889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-10-08 21:05:02.128955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-10-08 21:05:02.129167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.667 [2024-10-08 21:05:02.129233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.667 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error against addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 21:05:02.129 through 21:05:02.176, mostly for tqpair=0x7f944c000b90 with a handful of occurrences for tqpair=0x7f9440000b90; the duplicated entries are elided ...]
00:37:33.673 [2024-10-08 21:05:02.176297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.176361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.176595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.176624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.176770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.176836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.177067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.177132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.177298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.177327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.177458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.177503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.177717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.177784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.178025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.178054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.178230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.178295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.178542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.178608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 
00:37:33.673 [2024-10-08 21:05:02.178854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.178884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.179033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.179098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.179310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.179376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.179608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.179636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.179818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.179882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.180093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.180158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.180381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.180410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.180516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.180590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.180836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.180902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.181136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.181166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 
00:37:33.673 [2024-10-08 21:05:02.181303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.181368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.181601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.181683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.181861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.181890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.182059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.182124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.182344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.182419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.182668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.182698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.182825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.182890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-10-08 21:05:02.183064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-10-08 21:05:02.183128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.183384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.183413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.183579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.183643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-10-08 21:05:02.183901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.183966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.184169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.184198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.184360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.184424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.184667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.184734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.184974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.185004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.185149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.185213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.185422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.185486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.185686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.185717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.185827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.185895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.186096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.186161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-10-08 21:05:02.186371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.186401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.186550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.186615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.186842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.186907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.187108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.187137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.187307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.187371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.187602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.187685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.187926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.187955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.188104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.188170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.188406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.188471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.188697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.188726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-10-08 21:05:02.188865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.188921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.189134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.189199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.189430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.189459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.189611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.189705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.189862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.189892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.190122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.190151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.190317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.190381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.190618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.190725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.190935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.190964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.191138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.191202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-10-08 21:05:02.191431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.191496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.191728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.191758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.191902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.191966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.192183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.192248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.192487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.192521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.192677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.192743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.192983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.193049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-10-08 21:05:02.193279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-10-08 21:05:02.193309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.193452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.193518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.193724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.193790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-10-08 21:05:02.194027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.194056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.194201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.194266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.194496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.194561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.194797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.194826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.195019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.195084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.195325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.195390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.195598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.195627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.195800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.195865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.196111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.196176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.196408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.196438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-10-08 21:05:02.196562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.196627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.196839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.196869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.197001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.197030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.197194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.197259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.197465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.197529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.197781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.197811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.197956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.198021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.198250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.198315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.198552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.198581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.198755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.198822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-10-08 21:05:02.199026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.199090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.199325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.199354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.199480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.199544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.199787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.199854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.200079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.200108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.200285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.200350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.200549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.200614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.200832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.200861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.201011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.201076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.201279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.201345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-10-08 21:05:02.201552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.201581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.201761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.201827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.202029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.202095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.202329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.202358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-10-08 21:05:02.202453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-10-08 21:05:02.202539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.202760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.202827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.203035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.203064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.203216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.203280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.203489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.203554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.203798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.203827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 
00:37:33.676 [2024-10-08 21:05:02.203949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.204014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.204217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.204281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.204536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.204600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.204808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.204837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.204956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.205020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.205233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.205262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.205416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.205480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.205720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.205749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.205887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.205917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.206019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.206099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 
00:37:33.676 [2024-10-08 21:05:02.206300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.206364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.206583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.206613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.206770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.206836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.207071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.207136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.207347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.207377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.207548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.207612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.207835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.207900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.208129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.208159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.208307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.208370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.208602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.208680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 
00:37:33.676 [2024-10-08 21:05:02.208918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.208948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.209112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.209176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.209403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.209468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.209707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.209737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.209911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.209976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.210215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.210280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.210474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.210504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.210647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.210726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.210939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.211004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.211230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.211260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 
00:37:33.676 [2024-10-08 21:05:02.211407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.211471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.211643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.211720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.211958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.211987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.676 [2024-10-08 21:05:02.212163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-10-08 21:05:02.212228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.212429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.212504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.212712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.212742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.212921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.212986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.213193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.213258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.213507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.213572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.213815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.213845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 
00:37:33.677 [2024-10-08 21:05:02.214034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.214099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.214311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.214341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.214487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.214552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.214765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.214795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.214953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.214982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.215154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.215218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.215403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.215467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.215642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.215679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.215794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.215822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.216059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.216123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 
00:37:33.677 [2024-10-08 21:05:02.216325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.216354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.216460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.216525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.216765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.216831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.217067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.217096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.217274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.217338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.217536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.217601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.217848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.217877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.218043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.218107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.218349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.218413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.218619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.218654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 
00:37:33.677 [2024-10-08 21:05:02.218808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.218872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.219093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.219158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.219362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.219392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.219562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.219626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.219881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.219945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.220178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.220207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.220385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.220450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.220669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.220735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.220964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.220994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.221089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.221161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 
00:37:33.677 [2024-10-08 21:05:02.221362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.221427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.221623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.221709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.221842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.221871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.677 [2024-10-08 21:05:02.222026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.677 [2024-10-08 21:05:02.222091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.677 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.222299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.222333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.222504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.222570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.222819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.222849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.222953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.222982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.223108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.223180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.223386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.223451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 
00:37:33.678 [2024-10-08 21:05:02.223664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.223694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.223799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.223869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.224077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.224142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.224373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.224402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.224547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.224610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.224863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.224928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.225133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.225163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.225266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.225326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.225579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.225644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.225854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.225884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 
00:37:33.678 [2024-10-08 21:05:02.226013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.226084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.226262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.226326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.226527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.226556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.226659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.226717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.226911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.226976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.227181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.227211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.227361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.227426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.227675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.227742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.227943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.227972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.228150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.228215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 
00:37:33.678 [2024-10-08 21:05:02.228458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.228523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.228776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.228805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.228953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.229017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.229223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.229288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.229538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.229602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.229841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.229871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.230082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.230147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.230358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.230387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.230489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.230552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.230781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.230811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 
00:37:33.678 [2024-10-08 21:05:02.230941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.230970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.231113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.231178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.231382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.231446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.678 [2024-10-08 21:05:02.231663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.678 [2024-10-08 21:05:02.231693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.678 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.231863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.231939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.232179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.232243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.232443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.232472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.232625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.232708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.232925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.232989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.233190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.233219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 
00:37:33.679 [2024-10-08 21:05:02.233370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.233435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.233638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.233719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.233924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.233953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.234102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.234166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.234366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.234431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.234608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.234637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.234819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.234883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.235117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.235182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.235400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.235429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.235599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.235680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 
00:37:33.679 [2024-10-08 21:05:02.235919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.235983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.236205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.236234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.236409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.236474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.236707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.236795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.236989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.237019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.237145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.237208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.237435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.237501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.237734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.237764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.237922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.237987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.238212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.238277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 
00:37:33.679 [2024-10-08 21:05:02.238504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.238569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.238810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.238840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.239001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.239066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.239249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.239278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.239456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.239521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.239709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.239739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.239898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.239927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.240079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.240144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.240383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.240448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.240682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.240713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 
00:37:33.679 [2024-10-08 21:05:02.240852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.240916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.241146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.241211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.241444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.241474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.679 [2024-10-08 21:05:02.241627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.679 [2024-10-08 21:05:02.241708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.679 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.241948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.242024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.242230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.242260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.242431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.242495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.242695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.242762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.242989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.243019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.243160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.243225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 
00:37:33.680 [2024-10-08 21:05:02.243453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.243518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.243763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.243793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.243962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.244027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.244235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.244300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.244518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.244548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.244645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.244731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.244962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.245028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.245264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.245293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.245475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.245540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.245763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.245830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 
00:37:33.680 [2024-10-08 21:05:02.246043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.246072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.246243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.246307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.246523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.246588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.246842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.246872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.246989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.247053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.247288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.247352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.247552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.247582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.247732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.247799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.248014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.248079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.248282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.248311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 
00:37:33.680 [2024-10-08 21:05:02.248412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.248470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.248693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.248759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.248994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.249024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.249150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.249215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.249447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.249512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.249716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.249745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.249871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.249943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.250150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.250216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.250454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.250483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.680 qpair failed and we were unable to recover it. 00:37:33.680 [2024-10-08 21:05:02.250631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.680 [2024-10-08 21:05:02.250710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 
00:37:33.681 [2024-10-08 21:05:02.250807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.250836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.250965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.250994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.251163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.251229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.251458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.251523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.251757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.251792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.251939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.252004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.252232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.252298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.252523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.252552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.252684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.252751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.252957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.253022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 
00:37:33.681 [2024-10-08 21:05:02.253265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.253294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.253431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.253496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.253681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.253748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.253918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.253947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.254099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.254178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.254407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.254473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.254700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.254730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.254854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.254919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.255142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.255207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.255435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.255465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 
00:37:33.681 [2024-10-08 21:05:02.255560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.255636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.255875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.255941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.256149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.256178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.256329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.256393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.256633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.256712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.256952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.256982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.257124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.257190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.257388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.257453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.257693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.257744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.257921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.258013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 
00:37:33.681 [2024-10-08 21:05:02.258248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.258317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.258579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.258648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.258843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.258873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.259078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.259144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.259351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.259381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.259483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.259541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-10-08 21:05:02.259784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-10-08 21:05:02.259815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.259924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.259954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.260132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.260197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.260401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.260466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-10-08 21:05:02.260690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.260720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.260869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.260935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.261138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.261203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.261443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.261473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.261622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.261702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.261920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.261986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.262194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.262224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.262374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.262438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.262618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.262701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.262914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.262944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-10-08 21:05:02.263038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.263223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.263413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.263594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.263765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.263951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.263981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.264114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.264179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.264402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.264432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.265442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.265519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.265741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.265772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-10-08 21:05:02.265902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.265933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.266036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.266115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.266345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.266410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.266640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.266678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.266827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.266893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.267106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.267172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.267381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.267411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.267561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.267626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.267846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.267911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.268109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.268139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-10-08 21:05:02.268280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.268346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.268524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.268600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.268854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.268885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.268999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.269076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.269276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.269341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.269569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-10-08 21:05:02.269598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-10-08 21:05:02.269725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.269791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.270039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.270104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.270321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.270350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.270525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.270590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-10-08 21:05:02.270821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.270852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.270950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.270979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.271068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.271109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.271326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.271390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.271667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.271697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.271872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.271938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.272165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.272229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.272425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.272454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.272609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.272689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.272926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.273002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-10-08 21:05:02.273213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.273242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.273398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.273462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.273737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.273803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.274083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.274113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.274278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.274343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.274539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.274603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.274809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.274839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.274941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.274970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.275195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.275261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.275441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.275471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-10-08 21:05:02.275579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.275609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.275855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.275920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.276223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.276253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.276475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.276539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.276767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.276833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.277024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.277054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.277235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.277300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.277508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.277572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.277803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.277834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.277938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.278008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-10-08 21:05:02.278213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.278284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.278524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.278599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.278805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.278835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.278980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.279045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.279275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.279304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.279448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.279513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.279806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.279872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.280122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.280151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.280265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.280331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-10-08 21:05:02.280546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-10-08 21:05:02.280609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-10-08 21:05:02.280803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.280832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.280927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.280957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.281207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.281277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.281544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.281573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.281744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.281811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.282058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.282124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.282331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.282361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.282529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.282594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.282803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.282870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.283059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.283088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-10-08 21:05:02.283229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.283303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.283535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.283598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.283788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.283817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.283941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.283999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.284205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.284276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.284617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.284720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.284828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.284857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.284989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.285052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.285410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.285440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.285669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.285720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-10-08 21:05:02.285821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.285850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.286002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.286031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.286164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.286229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.286466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.286532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.286776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.286806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.286959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.287025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.287257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.287321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.287530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.287559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.287736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.287803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.288017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.288081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-10-08 21:05:02.288367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.288396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.288589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.288696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.288913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.288987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.289233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.289262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.289401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.289465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.289700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.289765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.290039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-10-08 21:05:02.290068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-10-08 21:05:02.290216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.290281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.290550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.290613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.290818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.290848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-10-08 21:05:02.291022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.291087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.291276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.291348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.291562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.291591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.291757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.291821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.292059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.292123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.292361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.292390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.292534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.292599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.292811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.292840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.292968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.293007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.293139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.293202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-10-08 21:05:02.293416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.293481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.293711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.293741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.293891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.293955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.294191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.294256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.294518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.294547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.294676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.294740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.294951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.295014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.295239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.295268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.295456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.295533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.295761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.295826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-10-08 21:05:02.296118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.296147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.296266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.296340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.296568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.296632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.296852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.296881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.297007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.297082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.297262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.297337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.297579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.297608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.297789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.297854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.298093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.298159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.298462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.298498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-10-08 21:05:02.298694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.298761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.299035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.299112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.299310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.299340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.299519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.299583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.299803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.299833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.300041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.300071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.300249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.300326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.300559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.300624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.300901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.300931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-10-08 21:05:02.301054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.301119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-10-08 21:05:02.301322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-10-08 21:05:02.301387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.301581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.301611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.301779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.301847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.302144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.302208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.302431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.302460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.302621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.302710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.302932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.302997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.303168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.303197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.303339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.303402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.303594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.303674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-10-08 21:05:02.303877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.303907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.304088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.304154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.304363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.304428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.304634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.304681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.304779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.304858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.305064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.305131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.305335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.305375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.305557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.305621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.305869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.305933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.306109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.306139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-10-08 21:05:02.306322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.306387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.306590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.306676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.306868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.306897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.307017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.307080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.307371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.307435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.307662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.307692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.307877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.307941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.308120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.308189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.308366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.308395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.308590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.308672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-10-08 21:05:02.308926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.308991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.309200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.309235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.309408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.309473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.309772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.309838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.310039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.310068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.310248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.310312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.310557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.310620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.310937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.310966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.311118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.311183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.311417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.311481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-10-08 21:05:02.311758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.311788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.311936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.312002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.312233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.312297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.312485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.312525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.312688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.312754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-10-08 21:05:02.313069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-10-08 21:05:02.313135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.313304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.313333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.313466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.313495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.313831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.313861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.314009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.314038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-10-08 21:05:02.314161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.314226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.314393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.314458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.314636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.314671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.314814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.314842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.315071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.315143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.315441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.315470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.315645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.315725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.315968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.316033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.316294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.316323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.316469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.316534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-10-08 21:05:02.316764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.316831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.317018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.317059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.317192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.317268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.317498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.317562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.317782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.317812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.317969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.318043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.318368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.318432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.318703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.318733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.318871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.318937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.319145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.319208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-10-08 21:05:02.319438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.319467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.319647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.319736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.319948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.320013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.320263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.320292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.320441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.320505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.320741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.320771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.320903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.320931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.321138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.321202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.321414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.321480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.321745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.321774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-10-08 21:05:02.321913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.321978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.322265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.322331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.322640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.322674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.322858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.322923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.323127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.323192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.323442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.323472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.323668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.323734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.323963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.324028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.324284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.324313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.324465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.324529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-10-08 21:05:02.324765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.324830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.325164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.325227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-10-08 21:05:02.325477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-10-08 21:05:02.325541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.325902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.325974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.326307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.326337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.326553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.326617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.326834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.326899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.327126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.327165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.327349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.327424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.327685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.327750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 
00:37:33.688 [2024-10-08 21:05:02.328043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.328072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.328227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.328292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.328547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.328612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.328847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.328876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.329069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.329134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.329376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.329441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.329617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.329645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.329833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.329863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.330082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.330147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.330439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.330468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 
00:37:33.688 [2024-10-08 21:05:02.330632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.330710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.331029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.331115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.331340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.331369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.331560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.331625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.331829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.331895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.332121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.332150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.332246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.332305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.332579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.332644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.332892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.332921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.333064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.333128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 
00:37:33.688 [2024-10-08 21:05:02.333449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.333521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.333814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.333844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.334010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.334080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.334259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.334329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.334539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.334576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.334715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.334781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.335013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.335078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.335395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.335454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.335687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.335754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.336016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.336081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 
00:37:33.688 [2024-10-08 21:05:02.336250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.336288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.336486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.336550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.336747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.336812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.337070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.688 [2024-10-08 21:05:02.337099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.688 qpair failed and we were unable to recover it. 00:37:33.688 [2024-10-08 21:05:02.337236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.337300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.337504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.337569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.337843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.337873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.338021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.338086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.338453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.338517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.338756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.338795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 
00:37:33.689 [2024-10-08 21:05:02.338937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.339002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.339265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.339329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.339525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.339588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.339778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.339808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.339965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.340030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.340241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.340280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.340469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.340533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.340758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.340825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.341191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.341266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.341598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.341681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 
00:37:33.689 [2024-10-08 21:05:02.341946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.342012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.342225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.342258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.342410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.342475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.342800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.342867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.343084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.343122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.343320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.343385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.343623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.343701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.343926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.343955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.344067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.344140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.344358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.344422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 
00:37:33.689 [2024-10-08 21:05:02.344747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.344794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.345055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.345120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.345378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.345443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.345629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.345664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.345848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.345913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.346193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.346258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.346438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.346467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.346580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.346609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.346920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.346985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.347287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.347316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 
00:37:33.689 [2024-10-08 21:05:02.347508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.347572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.347818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.347847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.348004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.348034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.348155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.348219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.348439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.348503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.348756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.348786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.348976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.349041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.349315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.349380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.349585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.349615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.349772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.349838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 
00:37:33.689 [2024-10-08 21:05:02.350119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.350184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.350490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.350519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.350714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.689 [2024-10-08 21:05:02.350781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.689 qpair failed and we were unable to recover it. 00:37:33.689 [2024-10-08 21:05:02.350979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.351044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.351285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.351314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.351460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.351524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.351718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.351783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.352101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.352130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.352390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.352461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.352715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.352780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-10-08 21:05:02.352957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.352989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.353131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.353218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.353383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.353447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.353679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.353708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.353858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.353920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.354110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.354183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.354480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.354509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.354710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.354739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.354858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.354886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.355075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.355104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-10-08 21:05:02.355304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.355380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.355678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.355756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.356053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.356090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.356270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.356344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.356567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.356632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.356939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.356968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.357122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.357187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.357433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.357497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.357702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.357733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.357924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.357988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-10-08 21:05:02.358304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.358380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.358730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.358792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.359170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.359234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.359483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.359547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.359864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.359893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.360132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.360196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.360470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.360536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.360708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.360747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.360870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.360898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.361142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.361206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-10-08 21:05:02.361484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.361513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.361714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.361781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.362084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.362159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.362508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.362576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.362870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.362930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.363149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.363214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.363443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.363472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.363631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.363710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.363966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.364031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.364312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.364342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-10-08 21:05:02.364502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.364566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.364788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.364865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.365180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-10-08 21:05:02.365209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-10-08 21:05:02.365439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.365503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.365755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.365821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.366023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.366052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.366232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.366296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.366588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.366666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.366895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.366925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.367094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.367159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-10-08 21:05:02.367429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.367493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.367713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.367742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.367873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.367938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.368158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.368223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.368456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.368485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.368637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.368716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.368835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.368864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.369121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.369149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.369263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.369326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.369540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.369607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-10-08 21:05:02.369949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.369978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.370166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.370243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.370465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.370530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.370759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.370789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.370928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.370992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.371222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.371287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.371620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.371669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.371926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.371991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.372211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.372277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.372590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.372619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-10-08 21:05:02.372785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.372851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.373075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.373138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.373394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.373424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.373595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.373675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.373891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.373956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.374239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.374268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.374453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.374518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.374724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.374777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.375003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.375033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.375207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.375272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-10-08 21:05:02.375462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.375526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.375750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.375785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.375938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.376004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.376234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.376299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.376522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.376587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.376942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.377013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.377236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.377301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.377633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.377723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.377858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-10-08 21:05:02.377885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-10-08 21:05:02.378099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.378174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-10-08 21:05:02.378371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.378408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.378599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.378682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.378911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.378976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.379230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.379259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.379406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.379471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.379766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.379833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.380042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.380071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.380227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.380292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.380465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.380530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.380736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.380775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-10-08 21:05:02.380958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.381034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.381307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.381372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.381713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.381743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.381853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.381903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.382115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.382180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.382370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.382406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.382598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.382699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.382837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.382878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.383095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.383124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.383242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.383306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-10-08 21:05:02.383541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.383605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.383840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.383869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.384006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.384070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.384256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.384321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.384563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.384593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.384735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.384800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.385072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.385137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.385376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.385406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.385526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.385592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.385803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.385833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-10-08 21:05:02.385966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.385995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.386146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.386210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.386556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.386622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.386858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.386887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.387063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.387128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.387310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.387373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.387621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.387714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.387994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.388057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.388263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.388328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.388529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.388593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-10-08 21:05:02.388788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.388817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.388998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.389063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.389309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.389338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.389487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.389551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.389792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.389858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.390123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.390152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.390310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.390346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.390510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.390572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.390802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-10-08 21:05:02.390832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-10-08 21:05:02.391005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.391067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-10-08 21:05:02.391267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.391329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.391500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.391531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.391744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.391811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.392133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.392196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.392436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.392465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.392600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.392677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.392913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.392977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.393245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.393275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.393429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.393504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.393726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.393793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-10-08 21:05:02.394042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.394071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.394253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.394317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.394582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.394647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.394987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.395017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.395227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.395294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.395536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.395601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.395955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.396029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.396310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.396374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.396714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.396745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.396914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.396943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-10-08 21:05:02.397116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.397181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.397390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.397455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.397726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.397757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.397879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.397943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.398151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.398215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.398460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.398489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.398643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.398731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.399007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.399072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.399380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.399409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.399596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.399675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-10-08 21:05:02.399871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.399936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.400154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.400183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.400268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.400296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.400468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.400529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.400798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.400828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.401041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.401104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.401308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.401372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.401566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.401596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.401725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.401804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.402017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.402079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-10-08 21:05:02.402313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.402341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.402552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.402620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.402857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.402923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.403153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.403183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.403310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.403338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.403460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.403525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-10-08 21:05:02.403722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-10-08 21:05:02.403751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.403880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.403926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.404040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.404079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.404280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.404318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 
00:37:33.694 [2024-10-08 21:05:02.404484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.404546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.404779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.404843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.405816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.405844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.406069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.406097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.406242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.406305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 
00:37:33.694 [2024-10-08 21:05:02.406541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.406603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.406886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.406915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-10-08 21:05:02.407108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-10-08 21:05:02.407137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.407259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.407305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.407440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.407470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.407661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.407705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.407847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.407882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.408017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.408045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.408169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.408197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.408448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.408483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 
00:37:33.971 [2024-10-08 21:05:02.408640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.408683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.408821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.408850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.408998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.409855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.409982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.410127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 
00:37:33.971 [2024-10-08 21:05:02.410279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.410434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.410565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.410749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.410928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.410957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.411056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.411084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.411222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.411250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.411372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.411400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.411594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.411642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.411799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.411828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 
00:37:33.971 [2024-10-08 21:05:02.411981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.412046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.412300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.412329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.412454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.971 [2024-10-08 21:05:02.412528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.971 qpair failed and we were unable to recover it. 00:37:33.971 [2024-10-08 21:05:02.412798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.412828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.412925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.412953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.413085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.413156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.413397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.413458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.413680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.413710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.413828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.413893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.414100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.414165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 
00:37:33.972 [2024-10-08 21:05:02.414373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.414412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.414567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.414632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.414925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.414991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.415245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.415275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.415448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.415512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.415841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.415907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.416163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.416192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.416359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.416423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.416611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.416690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.416902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.416942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 
00:37:33.972 [2024-10-08 21:05:02.417141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.417205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.417388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.417463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.417709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.417739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.417904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.417968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.418287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.418356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.418572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.418612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.418809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.418875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.419121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.419187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.419360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.419389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.419524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.419573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 
00:37:33.972 [2024-10-08 21:05:02.419839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.419868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.420005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.420034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.420184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.420248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.420462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.420534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.420778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.420808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.420976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.421041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.421313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.421377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.421608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.421638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.421794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.421869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.422082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.422147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 
00:37:33.972 [2024-10-08 21:05:02.422399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.422428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.972 [2024-10-08 21:05:02.422588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.972 [2024-10-08 21:05:02.422677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.972 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.422914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.422979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.423162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.423190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.423384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.423449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.423686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.423752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.423964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.423993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.424206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.424271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.424506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.424570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.424766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.424795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 
00:37:33.973 [2024-10-08 21:05:02.424954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.425020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.425204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.425270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.425482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.425512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.425634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.425708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.426000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.426064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.426323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.426353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.426497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.426562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.426810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.426839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.427001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.427029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.427164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.427227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 
00:37:33.973 [2024-10-08 21:05:02.427460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.427530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.427868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.427904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.428090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.428154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.428344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.428408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.428607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.428637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.428808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.428874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.429097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.429161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.429421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.429449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.429627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.429709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 00:37:33.973 [2024-10-08 21:05:02.429934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.973 [2024-10-08 21:05:02.429999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.973 qpair failed and we were unable to recover it. 
00:37:33.978 [2024-10-08 21:05:02.481763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.978 [2024-10-08 21:05:02.481829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.978 qpair failed and we were unable to recover it. 00:37:33.978 [2024-10-08 21:05:02.482008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.978 [2024-10-08 21:05:02.482036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.978 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.482175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.482230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.482420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.482482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.482700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.482730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.482826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.482874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.483116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.483180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.483427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.483457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.483594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.483681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.483973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.484038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 
00:37:33.979 [2024-10-08 21:05:02.484306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.484335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.484505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.484569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.484798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.484863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.485091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.485120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.485271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.485335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.485561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.485625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.485896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.485926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.486075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.486139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.486379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.486444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.486693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.486723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 
00:37:33.979 [2024-10-08 21:05:02.486847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.486876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.487130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.487194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.487470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.487499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.487680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.487728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.487829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.487857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.487968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.487995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.488144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.488209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.488474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.488538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.488810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.488841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.489028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.489093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 
00:37:33.979 [2024-10-08 21:05:02.489365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.489429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.489603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.489641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.489844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.489909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.490160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.490228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.490443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.490472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.490572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.490642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.490852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.490916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.491146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.491175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.491327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.491389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 00:37:33.979 [2024-10-08 21:05:02.491612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.491696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.979 qpair failed and we were unable to recover it. 
00:37:33.979 [2024-10-08 21:05:02.491965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.979 [2024-10-08 21:05:02.491995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.492130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.492192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.492397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.492461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.492698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.492728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.492907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.492972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.493240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.493304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.493524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.493554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.493710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.493775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.493981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.494056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.494288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.494318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 
00:37:33.980 [2024-10-08 21:05:02.494484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.494549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.494767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.494797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.494893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.494921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.495101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.495165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.495416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.495481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.495662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.495691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.495794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.495823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.496059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.496121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.496339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.496368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.496528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.496590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 
00:37:33.980 [2024-10-08 21:05:02.496803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.496867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.497116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.497144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.497296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.497358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.497573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.497636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.497948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.497978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.498169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.498234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.498522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.498586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.498832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.498862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.498956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.499041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.499308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.499372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 
00:37:33.980 [2024-10-08 21:05:02.499692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.499769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.500132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.500197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.500444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.500509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.500727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.500757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.500936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.501001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.501252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.501318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.980 [2024-10-08 21:05:02.501608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.980 [2024-10-08 21:05:02.501712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.980 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.501930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.501995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.502298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.502363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.502584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.502665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 
00:37:33.981 [2024-10-08 21:05:02.502799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.502828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.503018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.503083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.503320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.503349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.503498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.503563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.503862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.503939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.504140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.504169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.504304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.504371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.504607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.504701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.504920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.504954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.505129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.505194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 
00:37:33.981 [2024-10-08 21:05:02.505426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.505492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.505739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.505769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.505916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.505981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.506196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.506261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.506515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.506589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.506805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.506845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.506969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.507035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.507252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.507282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.507395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.507474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.507661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.507725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 
00:37:33.981 [2024-10-08 21:05:02.507941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.507969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.508111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.508176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.508414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.508490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.508789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.508827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.509043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.509109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.509317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.509381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.509705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.509735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.509950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.510016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.510329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.510401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.510605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.510635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 
00:37:33.981 [2024-10-08 21:05:02.510811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.510877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.511103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.511167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.511403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.511433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.511614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.511712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.511893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.511958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.512181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.512210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.512323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.981 [2024-10-08 21:05:02.512398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.981 qpair failed and we were unable to recover it. 00:37:33.981 [2024-10-08 21:05:02.512614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.512695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.512913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.512942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.513075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.513138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 
00:37:33.982 [2024-10-08 21:05:02.513463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.513528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.513786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.513816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.513966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.514030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.514265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.514330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.514548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.514619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.514791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.514828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.514983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.515047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.515292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.515321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.515470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.515535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.515818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.515884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 
00:37:33.982 [2024-10-08 21:05:02.516101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.516130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.516299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.516363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.516647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.516729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.516988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.517016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.517176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.517241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.517429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.517494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.517684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.517714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.517848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.517914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.518143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.518208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.518473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.518502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 
00:37:33.982 [2024-10-08 21:05:02.518679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.518745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.519005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.519069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.519298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.519327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.519475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.519541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.519861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.519938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.520279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.520336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.520691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.520757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.520967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.521032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.521228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.521257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.521408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.521469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 
00:37:33.982 [2024-10-08 21:05:02.521744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.521809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.522035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.522064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.522212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.522277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.522607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.522701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.522857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.522886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.523009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.523090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.523352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.982 [2024-10-08 21:05:02.523417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.982 qpair failed and we were unable to recover it. 00:37:33.982 [2024-10-08 21:05:02.523684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.523714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.523881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.523945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.524265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.524340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 
00:37:33.983 [2024-10-08 21:05:02.524572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.524602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.524758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.524823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.525055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.525119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.525335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.525376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.525523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.525585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.525806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.525880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.526061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.526089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1866100 Killed "${NVMF_APP[@]}" "$@"
00:37:33.983 [2024-10-08 21:05:02.526270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.526332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.526674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.526742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
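The flood of connect() failures in this part of the log is expected: the "line 36: 1866100 Killed" message above shows that target_disconnect.sh has just killed the running nvmf_tgt process, so until a new target is started every TCP connect to 10.0.0.2 port 4420 is refused with errno 111, which is ECONNREFUSED on Linux. A minimal, hypothetical way to reproduce the same failure mode from a shell (not part of the test scripts; the address and port are simply taken from the log) is:

  # Hypothetical illustration: probe a TCP port that has no listener.
  # bash's /dev/tcp redirection fails with "Connection refused" when the
  # kernel returns ECONNREFUSED (errno 111), the same error that
  # posix_sock_create reports in the log lines above.
  if ! timeout 5 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "connect to 10.0.0.2:4420 failed (refused => errno 111 ECONNREFUSED)"
  fi

Each failed reconnect attempt logs the same posix.c/nvme_tcp.c error pair followed by "qpair failed and we were unable to recover it.", which is why the message repeats until the target comes back up.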
00:37:33.983 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:33.983 [2024-10-08 21:05:02.526969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.526998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:33.983 [2024-10-08 21:05:02.527180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.527245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:37:33.983 [2024-10-08 21:05:02.527437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.527502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:33.983 [2024-10-08 21:05:02.527737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:33.983 [2024-10-08 21:05:02.527767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.527948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.528023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.528264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.528329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.528497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.528526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.528703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-10-08 21:05:02.528769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
00:37:33.983 [2024-10-08 21:05:02.529021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.529085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.529377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.529407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.529539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.529615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.529788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.529817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.529973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.530001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.530192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.530265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.530559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.530624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.530918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.530947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.531158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.531223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.531461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.531528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 
00:37:33.983 [2024-10-08 21:05:02.531729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.531759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.531861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.531890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.532070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.532132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.532389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.532418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.532549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.532612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.532913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.532976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.533247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.533276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.983 [2024-10-08 21:05:02.533407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.983 [2024-10-08 21:05:02.533470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.983 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.533648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.533735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1866658
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:33.984 [2024-10-08 21:05:02.533963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.533991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1866658
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 [2024-10-08 21:05:02.534151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.534212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1866658 ']'
00:37:33.984 [2024-10-08 21:05:02.534433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.534497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:33.984 [2024-10-08 21:05:02.534710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.534738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:33.984 [2024-10-08 21:05:02.534839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.534914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:33.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:33.984 [2024-10-08 21:05:02.535149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-10-08 21:05:02.535211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:33.984 21:05:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.984 [2024-10-08 21:05:02.535452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.535482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.535722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.535799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.536028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.536092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.536289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.536318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.536489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.536555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.536867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.536897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.537108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.537143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.537269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.537335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.537571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.537636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 
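Interleaved with the connection errors above, the shell trace shows the recovery path of the test: nvmfappstart -m 0xF0 relaunches nvmf_tgt (new PID 1866658) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then polls (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the new target's RPC socket answers. A simplified sketch of that start-and-wait pattern, assuming SPDK's stock scripts/rpc.py client is used as the readiness probe (the actual autotest_common.sh helper may check readiness differently), looks like this:

  # Simplified sketch, not the actual waitforlisten implementation:
  # start the target in the test namespace, then poll its RPC socket
  # until it accepts a request or the retry budget is exhausted.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  for _ in $(seq 1 100); do                  # max_retries=100, as in the trace
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break                              # target is up and serving RPCs
      fi
      sleep 0.5
  done

Until this wait completes and the target's TCP listener is reconfigured, the host-side connect attempts keep failing, which is why the errno 111 messages continue in the log below.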
00:37:33.984 [2024-10-08 21:05:02.537971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.538002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.538249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.538315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.538633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.538714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.539037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.539071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.539292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.539358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.539638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.539735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.539963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.540000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.540216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.540292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.540532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.540597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.540930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.541000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 
00:37:33.984 [2024-10-08 21:05:02.541205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.541269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.541499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.541564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.541784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.541813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.541918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.541947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.542211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-10-08 21:05:02.542276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-10-08 21:05:02.542510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.542540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.542741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.542807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.543015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.543081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.543300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.543330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.543485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.543550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-10-08 21:05:02.543831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.543897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.544115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.544145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.544308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.544372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.544584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.544666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.544810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.544839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.544998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.545063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.545289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.545353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.545626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.545661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.545794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.545860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.546096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.546161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-10-08 21:05:02.546480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.546514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.546684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.546762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.547026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.547092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.547316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.547346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.547495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.547560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.547826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.547892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.548117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.548146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.548316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.548382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.548620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.548700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.548995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.549025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-10-08 21:05:02.549168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.549234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.549433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.549497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.549755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.549785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.549958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.550024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.550254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.550319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.550647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.550719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.550912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.550978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.551230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.551294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.551525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.551590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.551878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.551956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-10-08 21:05:02.552251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.552316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.552529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.552593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.552831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-10-08 21:05:02.552860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-10-08 21:05:02.553054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.553119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.553340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.553370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.553560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.553624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.553853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.553918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.554139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.554169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.554271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.554333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.554536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.554609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-10-08 21:05:02.554910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.554940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.555066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.555132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.555342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.555409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.555610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.555640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.555807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.555873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.556061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.556123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.556334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.556372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.556563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.556626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.556801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.556831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.557012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.557041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-10-08 21:05:02.557206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.557281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.557601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.557683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.557897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.557927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.558082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.558147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.558378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.558444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.558695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.558725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.558900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.558965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.559175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.559239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.559455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.559492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.559674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.559740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-10-08 21:05:02.559927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.559991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.560204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.560233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.560400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.560464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.560674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.560741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.560958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.560989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.561169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.561234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.561423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.561488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.561816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.561846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.562067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.562142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.562407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.562472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-10-08 21:05:02.562748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.562778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.562969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.563044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-10-08 21:05:02.563229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-10-08 21:05:02.563294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.563541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.563606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.563839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.563869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.564050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.564121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.564366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.564395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.564520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.564586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.564809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.564839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.564968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.564997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-10-08 21:05:02.565218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.565281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.565604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.565680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.565921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.565951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.566100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.566165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.566392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.566456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.566778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.566808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.566946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.567008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.567264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.567328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.567550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.567579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.567725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.567767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-10-08 21:05:02.567995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.568081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.568327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.568356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.568533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.568598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.568934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.568999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.569270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.569298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.569478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.569554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.569887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.569962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.570258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.570287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.570416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.570481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.570697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.570765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-10-08 21:05:02.571020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.571050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.571212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.571277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.571465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.571532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.571755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.571786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.571985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.572056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.572330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-10-08 21:05:02.572395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-10-08 21:05:02.572626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.572714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.572875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.572938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.573149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.573218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.573447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.573477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 
00:37:33.988 [2024-10-08 21:05:02.573700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.573730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.573829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.573858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.573973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.574002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.574175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.574240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.574508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.574572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.574859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.574889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.575089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.575159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.575414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.575479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.575685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.575720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.575813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.575867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 
00:37:33.988 [2024-10-08 21:05:02.576037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.576108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.576317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.576347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.576527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.576592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.576852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.576918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.577151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.577180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.577384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.577448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.577686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.577752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.577963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.577993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.578183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.578253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.578436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.578500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 
00:37:33.988 [2024-10-08 21:05:02.578707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.578741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.578848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.578918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.579144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.579209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.579381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.579422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.579610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.579692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.579950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.580014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.580324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.580354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.580513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.580577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.580838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.580868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-10-08 21:05:02.581004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-10-08 21:05:02.581034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 
00:37:33.989 [2024-10-08 21:05:02.581166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.581230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.581505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.581569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.581842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.581872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.581991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.582056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.582269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.582334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.582598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.582626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.582755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.582821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.583044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.583109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.583430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.583459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.583716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.583782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 
00:37:33.989 [2024-10-08 21:05:02.583984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.584049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.584265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.584295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.584466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.584530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.584800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.584867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.585082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.585112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.585262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.585326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.585574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.585638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.585933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.585971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.586165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.586229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.586408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.586473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 
00:37:33.989 [2024-10-08 21:05:02.586703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.586733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.586889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.586953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.587152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.587216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.587436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.587465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.587610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.587710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.587833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.587862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.587997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.588025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.588225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.588299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.588529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.588593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.588888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.588918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 
00:37:33.989 [2024-10-08 21:05:02.589078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.589153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.589426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.589490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.589708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.589738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.589917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.589982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.590272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.590345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.590642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.590676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.989 [2024-10-08 21:05:02.590856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.989 [2024-10-08 21:05:02.590922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.989 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.591177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.591241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.591518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.591547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.591706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.591773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 
00:37:33.990 [2024-10-08 21:05:02.592087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.592151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.592353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.592382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.592558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.592622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.592857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.592922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.593189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.593219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.593365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.593430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.593601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.593682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.593981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.594011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.594167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.594231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.594439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.594505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 
00:37:33.990 [2024-10-08 21:05:02.594738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.594768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.594939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.595004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.595225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.595290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.595509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.595573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.595806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.595836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.595987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.596063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.596343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.596372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.596541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.596616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.596896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.596962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.597164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.597193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 
00:37:33.990 [2024-10-08 21:05:02.597345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.597410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.597637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.597715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.597900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.597929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.598027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.598056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.598256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.598320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.598559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.598588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.598773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.598839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.599110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.599175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.599396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.599425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.599618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.599722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 
00:37:33.990 [2024-10-08 21:05:02.599997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.990 [2024-10-08 21:05:02.600073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.990 qpair failed and we were unable to recover it. 00:37:33.990 [2024-10-08 21:05:02.600326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.600355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.600531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.600595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.600840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.600905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.601159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.601189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.601328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.601392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.601632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.601715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.601956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.601986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.602179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.602255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.602442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.602506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 
00:37:33.991 [2024-10-08 21:05:02.602747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.602777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.602933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.602997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.603160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.603227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.603462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.603528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.603749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.603779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.603901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.603929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 
00:37:33.991 [2024-10-08 21:05:02.604609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.604933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.604962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.605944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.605973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 
00:37:33.991 [2024-10-08 21:05:02.606072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.606102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.606228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.606257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.606397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.606426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.606573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.606602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.606702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.606732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.606936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.607007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.607184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.607248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.607452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.607482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.607583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.991 [2024-10-08 21:05:02.607612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.991 qpair failed and we were unable to recover it. 00:37:33.991 [2024-10-08 21:05:02.607856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.607923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 
00:37:33.992 [2024-10-08 21:05:02.608156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.608191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.608305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.608372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.608577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.608645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.608840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.608869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.609012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.609077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.609290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.609355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.609557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.609586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.609742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.609810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.609984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.610049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.610256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.610286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 
00:37:33.992 [2024-10-08 21:05:02.610461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.610526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.610795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.610862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.611133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.611162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.611365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.611429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.611625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.611706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.611928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.611958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.612068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.612131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.612339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.612409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.612595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.612624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.612764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.612841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 
00:37:33.992 [2024-10-08 21:05:02.613068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.613133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.613333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.613362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.613510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.613574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.613824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.613891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.614196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.614225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.614432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.614501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.614821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.614887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.615152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.615181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.615351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.615415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.615607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.615716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 
00:37:33.992 [2024-10-08 21:05:02.615847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.615876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.616036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.616101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.616337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.992 [2024-10-08 21:05:02.616401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.992 qpair failed and we were unable to recover it. 00:37:33.992 [2024-10-08 21:05:02.616658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.616689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.616829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.616894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.617101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.617171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.617359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.617389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.617513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.617584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.617845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.617911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.618115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.618144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 
00:37:33.993 [2024-10-08 21:05:02.618284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.618360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.618569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.618634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.618942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.618972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.619115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.619180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.619365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.619430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.619688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.619718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.619883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.619948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.620140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.620205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.620410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.620439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.620597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.620677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 
00:37:33.993 [2024-10-08 21:05:02.620896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.620962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.621188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.621217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.621388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.621452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.621716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.621782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.622006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.622035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.622227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.622303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.622481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.622546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.622816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.622846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.622988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.623053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.623257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.623323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 
00:37:33.993 [2024-10-08 21:05:02.623587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.623708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.623822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.623862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.624054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.624119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.624391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.624421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.624599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.624698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.624758] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:37:33.993 [2024-10-08 21:05:02.624803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.624830] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.993 [2024-10-08 21:05:02.624831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-10-08 21:05:02.625083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-10-08 21:05:02.625147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.625359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.625428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.625669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.625736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it.
00:37:33.994 [2024-10-08 21:05:02.625983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.626013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.626145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.626211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.626401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.626466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.626688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.626718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.626885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.626950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.627221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.627286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.627498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.627528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.627693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.627758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.627955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.628019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.628210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.628239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-10-08 21:05:02.628351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.628414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.628666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.628733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.628992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.629021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.629168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.629233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.629506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.629571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.629862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.629891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.630063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.630127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.630395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.630460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.630710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.630741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.630882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.630946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-10-08 21:05:02.631161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.631228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.631466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.631531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.631727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.631757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.631888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.631969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.632167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.632205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.632375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.632439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.632643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.632727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.632953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.632982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.633129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.633193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.633418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.633483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-10-08 21:05:02.633685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.633715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.633842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.633921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.634150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.634215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.634434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.634463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.634601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.634684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.634917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.634983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.635209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.635238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-10-08 21:05:02.635396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-10-08 21:05:02.635462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.635644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.635727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.635927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.635956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-10-08 21:05:02.636060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.636110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.636339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.636405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.636612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.636641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.636820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.636885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.637115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.637180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.637386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.637416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.637539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.637612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.637889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.637955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.638164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.638193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.638288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.638342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-10-08 21:05:02.638570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.638681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.638854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.638884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.639145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.639210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.639453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.639517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.639717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.639747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.639847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.639876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.640055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.640119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.640386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.640415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.640537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.640601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.640833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.640899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-10-08 21:05:02.641166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.641195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.641342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.641406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.641577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.641642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.641863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.641892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.641989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.642042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.642287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.642352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.642577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.642606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.642780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.642846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.643083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.643147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.643367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.643397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-10-08 21:05:02.643589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.643682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.643894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.643959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.644148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.644178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.644303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.644366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.644573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.644648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.644843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.644882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.645057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.645122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-10-08 21:05:02.645382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-10-08 21:05:02.645447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.645687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.645717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.645818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.645847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:33.996 [2024-10-08 21:05:02.646031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.646096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.646302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.646331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.646501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.646566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.646788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.646818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.646950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.646979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.647117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.647183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.647411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.647476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.647738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.647768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.647901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.647967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-10-08 21:05:02.648182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-10-08 21:05:02.648249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:34.001 [2024-10-08 21:05:02.701892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.701933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.702092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.702121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.702291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.702328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.702528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.702598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.702831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.702860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.703017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.703081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.703287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.703351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.703630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.703667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.703812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.703876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.704105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.704168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 
00:37:34.001 [2024-10-08 21:05:02.704376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.704405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.001 [2024-10-08 21:05:02.704560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.001 [2024-10-08 21:05:02.704623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.001 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.704941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.705006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.705212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.705249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.705425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.705489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.705762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.705828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.706032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.706062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.706219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.706283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.706543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.706607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.706811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.706841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 
00:37:34.002 [2024-10-08 21:05:02.707035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.707110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.707382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.707448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.707697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.707727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.707861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.707916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.708150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.708215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.708447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.708511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.708741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.708771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.708920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.708986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.709182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.709250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.709463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.709528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 
00:37:34.002 [2024-10-08 21:05:02.709744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.709774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.709927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.709992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.710179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.710244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.710463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.710531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.710755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.710785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.710879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.710908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.711080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.711115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.711276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.711320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.711538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.711577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.711695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.711731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 
00:37:34.002 [2024-10-08 21:05:02.711884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.711961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.712190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.712255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.712559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.712635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.712948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.713014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.713227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.713294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.713570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.002 [2024-10-08 21:05:02.713635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.002 qpair failed and we were unable to recover it. 00:37:34.002 [2024-10-08 21:05:02.713905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.713934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.714100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.714136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.714311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.714386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.714663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.714724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 
00:37:34.003 [2024-10-08 21:05:02.714821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.714851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.714975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.715038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.715247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.715312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.715524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.715591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-10-08 21:05:02.715864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-10-08 21:05:02.715893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.716038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.716074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.716247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.716284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.716420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.716455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.716600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.716629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.716824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.716861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 
00:37:34.265 [2024-10-08 21:05:02.717057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.717103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.717226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.717262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.717442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.717471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.717627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.717663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.717807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.717843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.718009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.718169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.718321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.718583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.718722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 
00:37:34.265 [2024-10-08 21:05:02.718858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.718887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.719970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.719999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.720149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.720178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.720274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.720303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 00:37:34.265 [2024-10-08 21:05:02.720433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.265 [2024-10-08 21:05:02.720462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.265 qpair failed and we were unable to recover it. 
00:37:34.266 [2024-10-08 21:05:02.720695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.720773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.720987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.721052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.721254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.721283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.721433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.721498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.721817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.721846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.721988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.722052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.722249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.722278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.722394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.722445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.722667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.722735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.722984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.723049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 
00:37:34.266 [2024-10-08 21:05:02.723333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.723362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.723509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.723574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.723797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.723870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.724078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.724145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.724385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.724415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.724570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.724636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.724917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.724982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.725191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.725256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.725527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.725556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.725749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.725817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 
00:37:34.266 [2024-10-08 21:05:02.726132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.726204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.726516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.726580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.726778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.726807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.726933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.726997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.727237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.727302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.727565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.727630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.727901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.727931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.728117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.728193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.728412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.728478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.728800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.728866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 
00:37:34.266 [2024-10-08 21:05:02.729133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.729162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.729332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.729398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.729584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.729666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.729961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.730025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.730239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.730268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.730468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.730532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.730854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.730926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.731268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.731333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.731603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.266 [2024-10-08 21:05:02.731632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.266 qpair failed and we were unable to recover it. 00:37:34.266 [2024-10-08 21:05:02.731801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.731866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 
00:37:34.267 [2024-10-08 21:05:02.732062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.732143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.732391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.732457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.732642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.732678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.732812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.732887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.733155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.733219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.733427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.733491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.733737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.733767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.733910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.733975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.734257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.734321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.734583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.734648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 
00:37:34.267 [2024-10-08 21:05:02.734955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.734985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.735176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.735240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.735511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.735575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.735859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.735926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.736144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.736173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.736346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.736412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.736642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.736721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.736955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.737020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.737224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.737253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.737410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.737476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 
00:37:34.267 [2024-10-08 21:05:02.737701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.737768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.737989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.738054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.738374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.738404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.738600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.738701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.738915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.738985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.739218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.739282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.739545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.739574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.739744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.739812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.740029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.740094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.740299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.740365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 
00:37:34.267 [2024-10-08 21:05:02.740610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.740691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.740815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.740844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.741079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.741181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.741520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.741594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.741922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.741981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.742195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.742267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.742475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.742544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.742820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.742850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.267 [2024-10-08 21:05:02.742979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.267 [2024-10-08 21:05:02.743008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.267 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.743183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.743248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 
00:37:34.268 [2024-10-08 21:05:02.743491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.743557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.743824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.743854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.744033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.744062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.744201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.744266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.744558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.744678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.744861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.744892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.745029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.745059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.745292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.745358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.745569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.745636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.745873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.745939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 
00:37:34.268 [2024-10-08 21:05:02.746204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.746233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.746391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.746456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.746626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.746735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.746943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.747008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.747225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.747255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.747428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.747494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.747794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.747895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.748245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.748315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.748560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.748590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.748734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.748801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 
00:37:34.268 [2024-10-08 21:05:02.749023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.749091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.749351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.749417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.749675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.749706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.749847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.749921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.750197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.750262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.750493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.750559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.750825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.750855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.750945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.751030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.751351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.751452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.751766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.751837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 
00:37:34.268 [2024-10-08 21:05:02.752038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.752067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.752207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.752282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.752479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.752547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.752777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.752807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.752936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.752965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.753163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.753235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.753497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.268 [2024-10-08 21:05:02.753562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.268 qpair failed and we were unable to recover it. 00:37:34.268 [2024-10-08 21:05:02.753768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.753798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.753925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.753954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.754056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.754120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 
00:37:34.269 [2024-10-08 21:05:02.754378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.754481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.754827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.754903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.755144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.755173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.755313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.755378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.755596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.755690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.755950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.756022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.756248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.756278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.756399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.756465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.756667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.756771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.756994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.757023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 
00:37:34.269 [2024-10-08 21:05:02.757254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.757284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.757406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.757471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.757752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.757819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.758023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.758041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:34.269 [2024-10-08 21:05:02.758090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.758402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.758432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.758544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.758610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.758907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.758974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.759179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.759243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.759447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.759483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 
00:37:34.269 [2024-10-08 21:05:02.759668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.759736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.759965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.760030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.760241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.760313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.760480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.760510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.760638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.760673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.760901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.760966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.761156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.761220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.761439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.761469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.761626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.761719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.762057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.762121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 
00:37:34.269 [2024-10-08 21:05:02.762368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.762434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.269 qpair failed and we were unable to recover it. 00:37:34.269 [2024-10-08 21:05:02.762647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.269 [2024-10-08 21:05:02.762725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.762881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.762941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.763152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.763226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.763405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.763471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.763731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.763761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.763862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.763925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.764233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.764341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.764698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.764730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.764831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.764861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 
00:37:34.270 [2024-10-08 21:05:02.765019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.765085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.765286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.765354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.765690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.765757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.765967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.766004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.766183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.766249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.766528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.766593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.766899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.766929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.767114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.767145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.767284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.767341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.767542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.767619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 
00:37:34.270 [2024-10-08 21:05:02.767905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.767971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.768288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.768318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.768511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.768546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.768710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.768748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.768921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.768987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.769218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.769248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.769405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.769471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.769789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.769893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.770153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.770222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.770432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.770461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 
00:37:34.270 [2024-10-08 21:05:02.770610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.770696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.770914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.770979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.771171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.771236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.771444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.771473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.771572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.771621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.771974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.772077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.772415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.772484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.772715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.772755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.772902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.772982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.270 [2024-10-08 21:05:02.773283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.773351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 
00:37:34.270 [2024-10-08 21:05:02.773630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.270 [2024-10-08 21:05:02.773711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.270 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.773901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.773931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.774057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.774125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.774348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.774417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.774638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.774730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.774951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.774980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.775126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.775192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.775363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.775427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.775668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.775734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.775920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.775950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 
00:37:34.271 [2024-10-08 21:05:02.776127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.776190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.776378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.776453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.776727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.776757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.776862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.776891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.777011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.777070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.777322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.777423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.777736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.777808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.778110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.778148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.778322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.778399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.778613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.778695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 
00:37:34.271 [2024-10-08 21:05:02.778911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.778976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.779210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.779239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.779423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.779489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.779677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.779749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.779938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.780002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.780224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.780258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.780397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.780460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.780700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.780767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.780962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.781028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.781202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.781230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 
00:37:34.271 [2024-10-08 21:05:02.781382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.781457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.781668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.781731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.781940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.782002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.782174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.782202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.782329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.782359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.782630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.782782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.783066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.783136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.783464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.783494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.783684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.783752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f944c000b90 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 00:37:34.271 [2024-10-08 21:05:02.783995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.271 [2024-10-08 21:05:02.784063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.271 qpair failed and we were unable to recover it. 
00:37:34.271 [2024-10-08 21:05:02.784302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.784368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.784572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.784601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.784762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.784828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.785051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.785116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.785393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.785459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.785701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.785732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.785862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.785912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.786154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.786219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.786408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.786472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 [2024-10-08 21:05:02.786679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.786715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5630 with addr=10.0.0.2, port=4420 00:37:34.272 qpair failed and we were unable to recover it. 00:37:34.272 A controller has encountered a failure and is being reset. 
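errno = 111 in the connect() failures above is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 while the initiator kept retrying, which is consistent with the target application only being started further down in this log. As a purely illustrative check (standard Linux tools assumed; not part of the test output), one could confirm the missing listener like this:
    nc -zv 10.0.0.2 4420      # from the initiator: reports "Connection refused" while the target is down
    ss -tln | grep ':4420'    # on the target host: no output means nothing is listening on the NVMe-oF TCP port yet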
00:37:34.272 [2024-10-08 21:05:02.786925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.272 [2024-10-08 21:05:02.787066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d35f0 with addr=10.0.0.2, port=4420 00:37:34.272 [2024-10-08 21:05:02.787121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d35f0 is same with the state(6) to be set 00:37:34.272 [2024-10-08 21:05:02.787189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d35f0 (9): Bad file descriptor 00:37:34.272 [2024-10-08 21:05:02.787236] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.272 [2024-10-08 21:05:02.787286] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.272 [2024-10-08 21:05:02.787327] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.272 Unable to reset the controller. 00:37:34.272 [2024-10-08 21:05:02.892893] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:34.272 [2024-10-08 21:05:02.892975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.272 [2024-10-08 21:05:02.892993] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.272 [2024-10-08 21:05:02.893007] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.272 [2024-10-08 21:05:02.893031] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.272 [2024-10-08 21:05:02.895019] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:37:34.272 [2024-10-08 21:05:02.895074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:37:34.272 [2024-10-08 21:05:02.895127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:37:34.272 [2024-10-08 21:05:02.895131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:37:35.205 [2024-10-08 21:05:03.787825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.205 [2024-10-08 21:05:03.787907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d35f0 with addr=10.0.0.2, port=4420 00:37:35.205 [2024-10-08 21:05:03.787936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d35f0 is same with the state(6) to be set 00:37:35.205 [2024-10-08 21:05:03.787977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d35f0 (9): Bad file descriptor 00:37:35.205 [2024-10-08 21:05:03.787997] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.205 [2024-10-08 21:05:03.788012] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.205 [2024-10-08 21:05:03.788031] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.205 Unable to reset the controller. 
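errno 111 in the connect() failures above is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 at that point, which is the condition this disconnect test provokes, and the host side gives up once the reconnect poll fails ("Unable to reset the controller."). A quick way to confirm the errno mapping and to grab the trace snapshot that the app_setup_trace NOTICE lines point at; this is only a sketch assuming a local SPDK build tree, with the shm name/id taken from the log:

  # decode errno 111 without any SPDK tooling
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'

  # capture the tracepoint snapshot suggested by the NOTICE lines above
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file around for offline analysis later
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0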
00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 Malloc0 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 [2024-10-08 21:05:04.060444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 [2024-10-08 21:05:04.088694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.463 21:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1866248 00:37:36.395 Controller properly reset. 00:37:40.577 Initializing NVMe Controllers 00:37:40.577 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:40.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:40.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:40.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:40.577 Initialization complete. Launching workers. 
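The rpc_cmd calls traced above are ordinary SPDK JSON-RPCs, so the same target can be reproduced by hand with scripts/rpc.py against a running nvmf_tgt. A sketch using the exact arguments from the trace (address, NQN and serial are this test's values, not defaults):

  # stand up the target used by target_disconnect.sh tc2
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

rpc_cmd in the test framework is essentially a wrapper that points rpc.py at the RPC socket of the app started by nvmfappstart.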
00:37:40.577 Starting thread on core 1 00:37:40.577 Starting thread on core 2 00:37:40.577 Starting thread on core 3 00:37:40.577 Starting thread on core 0 00:37:40.577 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:40.577 00:37:40.577 real 0m10.897s 00:37:40.577 user 0m34.012s 00:37:40.577 sys 0m8.523s 00:37:40.577 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:40.577 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.577 ************************************ 00:37:40.578 END TEST nvmf_target_disconnect_tc2 00:37:40.578 ************************************ 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.578 rmmod nvme_tcp 00:37:40.578 rmmod nvme_fabrics 00:37:40.578 rmmod nvme_keyring 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1866658 ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1866658 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1866658 ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1866658 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1866658 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1866658' 00:37:40.578 killing process with pid 1866658 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1866658 00:37:40.578 21:05:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1866658 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.578 21:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:43.126 00:37:43.126 real 0m17.179s 00:37:43.126 user 0m59.645s 00:37:43.126 sys 0m11.822s 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:43.126 ************************************ 00:37:43.126 END TEST nvmf_target_disconnect 00:37:43.126 ************************************ 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:43.126 00:37:43.126 real 6m39.612s 00:37:43.126 user 14m6.244s 00:37:43.126 sys 1m42.204s 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.126 21:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.126 ************************************ 00:37:43.126 END TEST nvmf_host 00:37:43.126 ************************************ 00:37:43.126 21:05:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:43.126 21:05:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:43.126 21:05:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:43.126 21:05:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:43.126 21:05:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.126 21:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.126 ************************************ 00:37:43.126 START TEST nvmf_target_core_interrupt_mode 00:37:43.126 ************************************ 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:43.126 * Looking for test storage... 00:37:43.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.126 --rc genhtml_branch_coverage=1 00:37:43.126 --rc genhtml_function_coverage=1 00:37:43.126 --rc genhtml_legend=1 00:37:43.126 --rc geninfo_all_blocks=1 00:37:43.126 --rc geninfo_unexecuted_blocks=1 00:37:43.126 00:37:43.126 ' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.126 --rc genhtml_branch_coverage=1 00:37:43.126 --rc genhtml_function_coverage=1 00:37:43.126 --rc genhtml_legend=1 00:37:43.126 --rc geninfo_all_blocks=1 00:37:43.126 --rc geninfo_unexecuted_blocks=1 00:37:43.126 00:37:43.126 ' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.126 --rc genhtml_branch_coverage=1 00:37:43.126 --rc genhtml_function_coverage=1 00:37:43.126 --rc genhtml_legend=1 00:37:43.126 --rc geninfo_all_blocks=1 00:37:43.126 --rc geninfo_unexecuted_blocks=1 00:37:43.126 00:37:43.126 ' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.126 --rc genhtml_branch_coverage=1 00:37:43.126 --rc genhtml_function_coverage=1 00:37:43.126 --rc genhtml_legend=1 00:37:43.126 --rc geninfo_all_blocks=1 00:37:43.126 --rc geninfo_unexecuted_blocks=1 00:37:43.126 00:37:43.126 ' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.126 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:43.127 ************************************ 00:37:43.127 START TEST nvmf_abort 00:37:43.127 ************************************ 00:37:43.127 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:43.386 * Looking for test storage... 00:37:43.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:43.386 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:43.386 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:37:43.386 21:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:43.386 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.387 --rc genhtml_branch_coverage=1 00:37:43.387 --rc genhtml_function_coverage=1 00:37:43.387 --rc genhtml_legend=1 00:37:43.387 --rc geninfo_all_blocks=1 00:37:43.387 --rc geninfo_unexecuted_blocks=1 00:37:43.387 00:37:43.387 ' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.387 --rc genhtml_branch_coverage=1 00:37:43.387 --rc genhtml_function_coverage=1 00:37:43.387 --rc genhtml_legend=1 00:37:43.387 --rc geninfo_all_blocks=1 00:37:43.387 --rc geninfo_unexecuted_blocks=1 00:37:43.387 00:37:43.387 ' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.387 --rc genhtml_branch_coverage=1 00:37:43.387 --rc genhtml_function_coverage=1 00:37:43.387 --rc genhtml_legend=1 00:37:43.387 --rc geninfo_all_blocks=1 00:37:43.387 --rc geninfo_unexecuted_blocks=1 00:37:43.387 00:37:43.387 ' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:43.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.387 --rc genhtml_branch_coverage=1 00:37:43.387 --rc genhtml_function_coverage=1 00:37:43.387 --rc genhtml_legend=1 00:37:43.387 --rc geninfo_all_blocks=1 00:37:43.387 --rc geninfo_unexecuted_blocks=1 00:37:43.387 00:37:43.387 ' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.387 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.647 21:05:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:43.647 21:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:46.187 21:05:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:46.187 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
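The device scan above is matching PCI vendor/device IDs against the known NVMf-capable NICs; 0x8086:0x159b is an Intel E810 port driven by the ice driver, which is why it lands in the e810 array. Outside the test framework the same devices can be located with lspci or sysfs; a small sketch assuming standard pciutils and sysfs on the build host:

  # list E810 ports by vendor:device ID
  lspci -d 8086:159b

  # same thing via sysfs, including the bound driver and netdev names
  for dev in /sys/bus/pci/devices/*; do
      [ "$(cat "$dev/vendor" 2>/dev/null)" = "0x8086" ] || continue
      [ "$(cat "$dev/device" 2>/dev/null)" = "0x159b" ] || continue
      echo "${dev##*/}: driver=$(basename "$(readlink "$dev/driver")") net=$(ls "$dev/net" 2>/dev/null | tr '\n' ' ')"
  done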
00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:46.187 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:46.187 Found net devices under 0000:84:00.0: cvl_0_0 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:46.187 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:46.188 Found net devices under 0000:84:00.1: cvl_0_1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:46.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:46.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:37:46.188 00:37:46.188 --- 10.0.0.2 ping statistics --- 00:37:46.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.188 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:46.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:46.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:46.188 00:37:46.188 --- 10.0.0.1 ping statistics --- 00:37:46.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.188 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1869610 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1869610 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1869610 ']' 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:46.188 21:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.448 [2024-10-08 21:05:15.045475] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:46.448 [2024-10-08 21:05:15.048289] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:37:46.448 [2024-10-08 21:05:15.048418] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.448 [2024-10-08 21:05:15.209496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:46.708 [2024-10-08 21:05:15.422809] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.708 [2024-10-08 21:05:15.422925] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.708 [2024-10-08 21:05:15.422963] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.708 [2024-10-08 21:05:15.422993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.708 [2024-10-08 21:05:15.423020] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.708 [2024-10-08 21:05:15.425114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.708 [2024-10-08 21:05:15.425211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:46.708 [2024-10-08 21:05:15.425216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.969 [2024-10-08 21:05:15.599491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:46.969 [2024-10-08 21:05:15.599741] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:46.969 [2024-10-08 21:05:15.599746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:46.969 [2024-10-08 21:05:15.600092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.969 [2024-10-08 21:05:15.682422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.969 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.228 Malloc0 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.228 Delay0 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.228 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.229 [2024-10-08 21:05:15.762391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.229 21:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:47.229 [2024-10-08 21:05:15.864633] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:49.762 Initializing NVMe Controllers 00:37:49.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:49.762 controller IO queue size 128 less than required 00:37:49.762 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:49.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:49.763 Initialization complete. Launching workers. 
00:37:49.763 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24992 00:37:49.763 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25049, failed to submit 66 00:37:49.763 success 24992, unsuccessful 57, failed 0 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.763 21:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.763 rmmod nvme_tcp 00:37:49.763 rmmod nvme_fabrics 00:37:49.763 rmmod nvme_keyring 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1869610 ']' 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1869610 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1869610 ']' 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1869610 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869610 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869610' 00:37:49.763 killing process with pid 1869610 
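The abort run summarized above exercises the target configured a few entries earlier: a Malloc0 malloc bdev wrapped in a Delay0 delay bdev (latency parameters that appear to add on the order of a second per I/O, keeping requests queued long enough to be abortable), exposed as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. The 24992 reads reported as failed on the NS line line up with the 24992 aborts reported as successful on the CTRLR line. Condensed into a standalone sketch, using the rpc.py and example paths that appear in this log and assuming the default /var/tmp/spdk.sock RPC socket (the test's rpc_cmd wrapper is not reproduced here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transport and backing bdevs, flags exactly as recorded in the trace above
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # subsystem with the delay bdev as its namespace, listening on the namespaced port
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # host-side abort workload: single core, 1 second, queue depth 128
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128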
00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1869610 00:37:49.763 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1869610 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:50.024 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:37:50.025 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.025 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.025 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.025 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.025 21:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.935 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.935 00:37:51.935 real 0m8.738s 00:37:51.935 user 0m10.299s 00:37:51.935 sys 0m3.697s 00:37:51.935 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.935 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.935 ************************************ 00:37:51.936 END TEST nvmf_abort 00:37:51.936 ************************************ 00:37:51.936 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:51.936 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:51.936 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.936 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:51.936 ************************************ 00:37:51.936 START TEST nvmf_ns_hotplug_stress 00:37:51.936 ************************************ 00:37:51.936 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:52.197 * Looking for test storage... 
00:37:52.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.197 --rc genhtml_branch_coverage=1 00:37:52.197 --rc genhtml_function_coverage=1 00:37:52.197 --rc genhtml_legend=1 00:37:52.197 --rc geninfo_all_blocks=1 00:37:52.197 --rc geninfo_unexecuted_blocks=1 00:37:52.197 00:37:52.197 ' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.197 --rc genhtml_branch_coverage=1 00:37:52.197 --rc genhtml_function_coverage=1 00:37:52.197 --rc genhtml_legend=1 00:37:52.197 --rc geninfo_all_blocks=1 00:37:52.197 --rc geninfo_unexecuted_blocks=1 00:37:52.197 00:37:52.197 ' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.197 --rc genhtml_branch_coverage=1 00:37:52.197 --rc genhtml_function_coverage=1 00:37:52.197 --rc genhtml_legend=1 00:37:52.197 --rc geninfo_all_blocks=1 00:37:52.197 --rc geninfo_unexecuted_blocks=1 00:37:52.197 00:37:52.197 ' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.197 --rc genhtml_branch_coverage=1 00:37:52.197 --rc genhtml_function_coverage=1 
00:37:52.197 --rc genhtml_legend=1 00:37:52.197 --rc geninfo_all_blocks=1 00:37:52.197 --rc geninfo_unexecuted_blocks=1 00:37:52.197 00:37:52.197 ' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.197 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:52.198 21:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:55.491 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.491 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.491 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.491 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.492 21:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:55.492 21:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:55.492 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:55.492 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:55.492 
21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:55.492 Found net devices under 0000:84:00.0: cvl_0_0 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:55.492 Found net devices under 0000:84:00.1: cvl_0_1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:55.492 21:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:55.492 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:55.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:37:55.493 00:37:55.493 --- 10.0.0.2 ping statistics --- 00:37:55.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.493 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:55.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:55.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:37:55.493 00:37:55.493 --- 10.0.0.1 ping statistics --- 00:37:55.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.493 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1871978 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1871978 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1871978 ']' 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.493 21:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:55.493 [2024-10-08 21:05:23.906541] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:55.493 [2024-10-08 21:05:23.909264] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:37:55.493 [2024-10-08 21:05:23.909378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.493 [2024-10-08 21:05:24.071570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:55.756 [2024-10-08 21:05:24.291023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.756 [2024-10-08 21:05:24.291125] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.756 [2024-10-08 21:05:24.291161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.756 [2024-10-08 21:05:24.291202] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.756 [2024-10-08 21:05:24.291229] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.756 [2024-10-08 21:05:24.293325] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:55.756 [2024-10-08 21:05:24.293433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:55.756 [2024-10-08 21:05:24.293438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.756 [2024-10-08 21:05:24.475469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:55.756 [2024-10-08 21:05:24.475717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:55.756 [2024-10-08 21:05:24.475735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:55.756 [2024-10-08 21:05:24.476062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
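As in the abort test, the hotplug-stress target is launched inside the cvl_0_0_ns_spdk namespace (so it owns 10.0.0.2 on the port that was moved there) and in interrupt mode with core mask 0xE, which is what the thread.c notices above about switching poll groups to intr mode reflect. A minimal sketch of the same launch, using the binary path recorded in the trace and a crude substitute for the harness's waitforlisten helper (the polling loop is an assumption, not the harness code):

  NS=cvl_0_0_ns_spdk
  TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  # start the target in the namespace: interrupt mode, cores 1-3 (mask 0xE)
  ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # the RPC Unix socket lives on the shared filesystem, so the host can poll for it
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done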
00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:56.696 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:56.956 [2024-10-08 21:05:25.662352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.956 21:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:57.896 21:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:58.155 [2024-10-08 21:05:26.750985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.155 21:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:58.722 21:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:58.982 Malloc0 00:37:58.982 21:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:59.242 Delay0 00:37:59.242 21:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.808 21:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:00.373 NULL1 00:38:00.373 21:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
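The RPC sequence just recorded is the whole target-side setup for the stress loop: subsystem cnode1 capped at 10 namespaces, a delay-wrapped malloc bdev attached first (so it becomes nsid 1), and a NULL1 null bdev attached second; NULL1 is the bdev the loop will later resize. Collected into one place, with the rpc.py path the script itself uses:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # at most 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # first namespace, the one the loop hot-removes
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1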
00:38:00.939 21:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1872654 00:38:00.939 21:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:00.939 21:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:00.939 21:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:02.312 Read completed with error (sct=0, sc=11) 00:38:02.312 21:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.570 21:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:02.570 21:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:03.135 true 00:38:03.135 21:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:03.135 21:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.701 21:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.973 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:38:03.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.973 21:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:03.973 21:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:04.578 true 00:38:04.578 21:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:04.578 21:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:05.144 21:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:05.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:05.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:05.710 21:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:05.710 21:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:05.968 true 00:38:05.968 21:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:05.968 21:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.388 21:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.646 
21:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:07.646 21:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:08.211 true 00:38:08.211 21:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:08.211 21:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.777 21:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.035 21:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:09.035 21:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:09.293 true 00:38:09.293 21:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:09.293 21:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.225 21:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.483 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:38:10.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.483 21:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:10.483 21:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:10.740 true 00:38:10.740 21:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:10.740 21:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.674 21:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.931 21:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:11.931 21:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:12.496 true 00:38:12.496 21:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:12.496 21:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.870 21:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.870 21:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:13.870 21:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:14.435 true 00:38:14.435 21:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:14.435 21:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.692 21:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.950 21:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:14.950 21:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:15.515 true 00:38:15.515 21:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:15.515 21:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.773 21:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.030 21:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:16.030 21:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:16.288 true 00:38:16.288 21:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:16.288 21:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.853 21:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:16.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:16.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:16.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:16.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.111 21:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:17.111 21:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:17.369 true 00:38:17.369 21:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:17.369 21:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 21:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.561 21:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:18.561 21:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:19.126 true 00:38:19.126 21:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:19.126 21:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.692 21:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.950 21:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:19.950 21:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:20.515 true 00:38:20.515 21:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:20.515 21:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.080 21:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.338 21:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:21.338 21:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:21.596 true 00:38:21.596 21:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:21.597 21:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:22.970 21:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.486 21:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:23.486 21:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:23.744 true 00:38:23.744 21:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:23.744 21:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 21:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.936 21:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:24.936 21:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:25.194 true 00:38:25.194 21:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:25.194 21:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:25.760 21:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.276 21:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:26.276 21:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:26.534 true 00:38:26.534 21:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:26.534 21:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.792 21:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.357 21:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:27.357 21:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:27.922 true 00:38:27.922 21:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:27.922 21:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
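The repeating blocks above are iterations of the hot-plug cycle traced at ns_hotplug_stress.sh@44-@50, running while spdk_nvme_perf (PID 1872654, started at @40/@42) drives 30 seconds of queue-depth-128, 512 B random reads; the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are perf reporting reads that fail while namespace 1 is detached, with repeats suppressed. A hedged reconstruction of the cycle follows; the individual commands, the kill -0 liveness probe and the incrementing null_size come from the log, while the while-loop framing and the $rpc shorthand are assumptions.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Workload generator (as traced at @40/@42): 30 s of QD-128 512 B random reads over TCP
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000                                        # @25 in the trace
    while kill -0 "$PERF_PID"; do                         # @44: keep cycling while perf is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))                      # @49
        $rpc bdev_null_resize NULL1 "$null_size"          # @50: resize the other namespace's bdev
    done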
00:38:29.295 21:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.553 21:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:29.553 21:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:30.120 true 00:38:30.120 21:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:30.120 21:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.742 21:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.999 21:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:30.999 21:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:31.257 true 00:38:31.514 21:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:31.514 
21:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:32.079 Initializing NVMe Controllers
00:38:32.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:32.079 Controller IO queue size 128, less than required.
00:38:32.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:32.079 Controller IO queue size 128, less than required.
00:38:32.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:32.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:38:32.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:38:32.079 Initialization complete. Launching workers.
00:38:32.079 ========================================================
00:38:32.079 Latency(us)
00:38:32.079 Device Information : IOPS MiB/s Average min max
00:38:32.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4492.20 2.19 19889.48 2371.38 1092216.19
00:38:32.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13310.20 6.50 9616.47 2432.87 538405.47
00:38:32.079 ========================================================
00:38:32.079 Total : 17802.40 8.69 12208.73 2371.38 1092216.19
00:38:32.079
00:38:32.079 21:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.644 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:32.644 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:32.644 true 00:38:32.901 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1872654 00:38:32.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1872654) - No such process 00:38:32.901 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1872654 00:38:32.901 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.158 21:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:33.416 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:33.416 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:33.416 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:33.416 21:06:02
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:33.416 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:33.984 null0 00:38:33.984 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:33.984 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:33.984 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:34.243 null1 00:38:34.243 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:34.243 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:34.243 21:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:34.811 null2 00:38:34.811 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:34.811 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:34.811 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:35.070 null3 00:38:35.070 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:35.070 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:35.070 21:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:35.329 null4 00:38:35.329 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:35.329 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:35.329 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:35.896 null5 00:38:35.896 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:35.896 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:35.896 21:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:36.464 null6 00:38:36.464 21:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:36.464 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:36.464 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:37.033 null7 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
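As a quick sanity check on the spdk_nvme_perf summary printed a little earlier, the Total row is just the two per-namespace rows combined: 4492.20 + 13310.20 = 17802.40 IOPS and 2.19 + 6.50 = 8.69 MiB/s (which also matches 17802.40 IOPS x 512 B ≈ 8.69 MiB/s), the average latency is the IOPS-weighted mean, (4492.20 x 19889.48 + 13310.20 x 9616.47) / 17802.40 ≈ 12208.7 us, in line with the reported 12208.73 us, and the min and max columns are simply the extremes across the two namespaces.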
00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
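From here the trace is the multi-threaded phase: ns_hotplug_stress.sh@58 sets nthreads=8 and pids=(), @59-@60 create null0 through null7 (100 MiB, 4096 B blocks), @62-@64 background one add_remove worker per namespace and collect its PID, @14-@18 show the worker body, and @66 waits on all eight PIDs. A hedged reconstruction follows; the function and loop framing is inferred from those traced line numbers rather than copied from the script, and the $rpc shorthand is again an assumption.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                                   # worker body traced at @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8                                       # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # @59-@60: create null0..null7
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do             # @62-@64: one background worker per namespace
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                # @66: the interleaved @16-@18 output above is these workers running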
00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1876889 1876890 1876892 1876893 1876896 1876898 1876899 1876902 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.033 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:37.292 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:37.292 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:37.292 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:37.292 21:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:37.292 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:37.292 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.292 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:37.292 21:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.550 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:37.808 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.808 21:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:37.809 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.067 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:38.327 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.327 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.327 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:38.586 21:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:38.586 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:38.586 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.586 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:38.586 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:38.586 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:38.844 21:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.844 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:39.103 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.361 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:38:39.361 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.361 21:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:39.361 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:39.361 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.620 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.878 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.878 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.878 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:39.878 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.136 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.394 21:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.394 21:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.395 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:40.395 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:40.395 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:40.653 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:40.653 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.653 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:40.653 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:40.653 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.912 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.170 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.171 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.429 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.429 21:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.429 21:06:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.429 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.686 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.944 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.202 21:06:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.202 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.460 21:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.460 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.460 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.460 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.460 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.718 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.718 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.718 21:06:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.718 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.719 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.977 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.978 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.978 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.236 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.237 21:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 
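The interleaved add/remove trace above, whose final iterations and countdown continue just below, comes from a short loop in target/ns_hotplug_stress.sh: an arithmetic counter at line 16 gates repeated nvmf_subsystem_add_ns calls at line 17 and nvmf_subsystem_remove_ns calls at line 18, cycling namespaces 1-8 (backed by bdevs null0-null7) on nqn.2016-06.io.spdk:cnode1. The script body itself is not reproduced in the log, so the sketch below is only a loose reconstruction of the RPC pattern the xtrace shows; the per-pass ordering is modelled with shuf as an assumption, the counter placement is simplified, and the slight out-of-order interleaving visible in the real trace is not mirrored.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do                                        # ns_hotplug_stress.sh@16
    # attach null0..null7 as namespaces 1..8 in a shuffled order (@17)
    for n in $(shuf -e {1..8}); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # then detach them again, also shuffled (@18)
    for n in $(shuf -e {1..8}); do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
done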
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.497 rmmod nvme_tcp 00:38:43.497 rmmod nvme_fabrics 00:38:43.497 rmmod nvme_keyring 00:38:43.497 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1871978 ']' 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1871978 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1871978 ']' 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1871978 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871978 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871978' 00:38:43.756 killing process with pid 1871978 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1871978 00:38:43.756 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1871978 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:38:44.016 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:44.277 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:44.277 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:44.277 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.277 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:44.277 21:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.190 00:38:46.190 real 0m54.176s 00:38:46.190 user 3m38.204s 00:38:46.190 sys 0m25.293s 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:46.190 ************************************ 00:38:46.190 END TEST nvmf_ns_hotplug_stress 00:38:46.190 ************************************ 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
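Once the counter reaches 10 the EXIT trap is cleared and the test tears itself down: nvmftestfini from test/nvmf/common.sh syncs, unloads the kernel initiator modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), has killprocess stop the target application (pid 1871978, running as reactor_1), strips the SPDK_NVMF iptables rules, removes the test network namespace, and flushes cvl_0_1, after which the harness prints the timing summary and the END TEST banner. A condensed sketch of that teardown, with this run's PID and interface names hard-coded and _remove_spdk_ns left as a note because its body is not expanded in the trace:

nvmfpid=1871978                          # target app started earlier in this test

sync
modprobe -v -r nvme-tcp                  # also drops nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics

if [[ $(ps --no-headers -o comm= "$nvmfpid") != sudo ]]; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid" 2> /dev/null         # only meaningful when the pid is a child of this shell
fi

iptables-save | grep -v SPDK_NVMF | iptables-restore   # nvmf/common.sh@789 "iptr"
# remove_spdk_ns: tears down the cvl_0_0_ns_spdk namespace (xtrace is muted for it in the log)
ip -4 addr flush cvl_0_1

run_test then wraps the next script, delete_subsystem.sh --transport=tcp --interrupt-mode, in the same START TEST / END TEST banner machinery from autotest_common.sh; that is where the trace below picks up.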
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:46.190 ************************************ 00:38:46.190 START TEST nvmf_delete_subsystem 00:38:46.190 ************************************ 00:38:46.190 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:46.450 * Looking for test storage... 00:38:46.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.451 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:46.451 21:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:46.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.451 --rc genhtml_branch_coverage=1 00:38:46.451 --rc genhtml_function_coverage=1 00:38:46.451 --rc genhtml_legend=1 00:38:46.451 --rc geninfo_all_blocks=1 00:38:46.451 --rc geninfo_unexecuted_blocks=1 00:38:46.451 00:38:46.451 ' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:46.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.451 --rc genhtml_branch_coverage=1 00:38:46.451 --rc genhtml_function_coverage=1 00:38:46.451 --rc genhtml_legend=1 00:38:46.451 --rc geninfo_all_blocks=1 00:38:46.451 --rc geninfo_unexecuted_blocks=1 00:38:46.451 00:38:46.451 ' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:46.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.451 --rc genhtml_branch_coverage=1 00:38:46.451 --rc genhtml_function_coverage=1 00:38:46.451 --rc genhtml_legend=1 00:38:46.451 --rc geninfo_all_blocks=1 00:38:46.451 --rc geninfo_unexecuted_blocks=1 00:38:46.451 00:38:46.451 ' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:46.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.451 --rc genhtml_branch_coverage=1 00:38:46.451 --rc genhtml_function_coverage=1 00:38:46.451 --rc 
genhtml_legend=1 00:38:46.451 --rc geninfo_all_blocks=1 00:38:46.451 --rc geninfo_unexecuted_blocks=1 00:38:46.451 00:38:46.451 ' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.451 21:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.451 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.452 21:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.748 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.748 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.748 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.748 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.748 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.749 21:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.749 21:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:49.749 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:49.749 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.749 21:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:49.749 Found net devices under 0000:84:00.0: cvl_0_0 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:49.749 Found net devices under 0000:84:00.1: cvl_0_1 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.749 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.750 21:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:38:49.750 00:38:49.750 --- 10.0.0.2 ping statistics --- 00:38:49.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.750 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:49.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:38:49.750 00:38:49.750 --- 10.0.0.1 ping statistics --- 00:38:49.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.750 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1880415 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1880415 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1880415 ']' 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:49.750 21:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.750 [2024-10-08 21:06:18.228013] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.750 [2024-10-08 21:06:18.230808] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:38:49.750 [2024-10-08 21:06:18.230932] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.750 [2024-10-08 21:06:18.394787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:50.012 [2024-10-08 21:06:18.611316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.012 [2024-10-08 21:06:18.611363] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.012 [2024-10-08 21:06:18.611380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.012 [2024-10-08 21:06:18.611394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.012 [2024-10-08 21:06:18.611406] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:50.012 [2024-10-08 21:06:18.612127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.012 [2024-10-08 21:06:18.612134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.012 [2024-10-08 21:06:18.773973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:50.012 [2024-10-08 21:06:18.774054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:50.012 [2024-10-08 21:06:18.774574] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
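For reference, the target bring-up traced above (nvmftestinit followed by nvmfappstart -m 0x3) reduces to roughly the commands below. Interface names, addresses and flags are the ones printed in this run; this is a condensed sketch of what the common.sh helpers do, not the helpers themselves, and the nvmf_tgt path is abbreviated.

# put one port of the e810 pair into a private namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # target-side address reachable from the initiator side

# start the target inside the namespace, interrupt mode, cores 0-1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &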
00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 [2024-10-08 21:06:19.889052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 [2024-10-08 21:06:19.929307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 NULL1 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.391 21:06:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 Delay0 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:51.391 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.392 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:51.392 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1880582 00:38:51.392 21:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:51.392 [2024-10-08 21:06:20.038961] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
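With the target listening, the setup steps visible in the trace are plain RPCs followed by a perf run against the new subsystem. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py (talking to /var/tmp/spdk.sock inside the namespace); the equivalent manual sequence, using the exact arguments from this run, would look roughly like:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# drive I/O from the initiator side while the subsystem exists
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!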
00:38:53.294 21:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:53.294 21:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.294 21:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 [2024-10-08 21:06:22.250001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f552000d320 is same with the state(6) to be set 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error 
(sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 
00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 starting I/O failed: -6 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 [2024-10-08 21:06:22.250798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b750 is same with the state(6) to be set 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Write completed with error (sct=0, sc=8) 00:38:53.553 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 
00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Write completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:53.554 Read completed with error (sct=0, sc=8) 00:38:54.487 [2024-10-08 21:06:23.228128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ca70 is same with the state(6) to be set 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Write completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Write completed with error (sct=0, sc=8) 00:38:54.487 Write completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 Read completed with error (sct=0, sc=8) 00:38:54.487 [2024-10-08 21:06:23.250097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f552000d650 is same with the state(6) to be set 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 [2024-10-08 21:06:23.252820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f552000cff0 is same with the state(6) to be set 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, 
sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 [2024-10-08 21:06:23.253573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b930 is same with the state(6) to be set 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Write completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.747 Read completed with error (sct=0, sc=8) 00:38:54.748 Write completed with error (sct=0, sc=8) 00:38:54.748 Read completed with error (sct=0, sc=8) 00:38:54.748 Read completed with error (sct=0, sc=8) 00:38:54.748 Read completed with error (sct=0, sc=8) 00:38:54.748 [2024-10-08 21:06:23.254450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b570 is same with the state(6) to be set 00:38:54.748 Initializing NVMe Controllers 00:38:54.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:54.748 Controller IO queue size 128, less than required. 00:38:54.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:54.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:54.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:54.748 Initialization complete. Launching workers. 
00:38:54.748 ======================================================== 00:38:54.748 Latency(us) 00:38:54.748 Device Information : IOPS MiB/s Average min max 00:38:54.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.20 0.08 945645.03 388.82 2001934.75 00:38:54.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.80 0.07 934607.62 433.84 1044443.72 00:38:54.748 ======================================================== 00:38:54.748 Total : 318.00 0.16 940341.56 388.82 2001934.75 00:38:54.748 00:38:54.748 [2024-10-08 21:06:23.255217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3ca70 (9): Bad file descriptor 00:38:54.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:54.748 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.748 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:54.748 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1880582 00:38:54.748 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1880582 00:38:55.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1880582) - No such process 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1880582 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1880582 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1880582 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.008 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:55.268 [2024-10-08 21:06:23.781302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1881094 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:55.268 21:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:55.268 [2024-10-08 21:06:23.855823] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
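The pass/fail check around the delete, both in the run above and in the second run that follows, is process polling: perf runs in the background, the subsystem is deleted (or, in the second pass, perf is simply allowed to finish after its 3-second run), and the script waits for the perf pid to disappear within a bounded number of 0.5 s steps. A minimal sketch of that loop, assuming $perf_pid was captured when perf was launched and using the 20-iteration bound from this phase (the real script's timeout handling may differ):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf did not exit"; exit 1; }
    sleep 0.5
done
wait "$perf_pid"   # first pass expects a non-zero status (I/O cut off by the delete), second pass a clean exit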
00:38:55.836 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:55.836 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:55.836 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:56.152 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:56.152 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:56.152 21:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:56.750 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:56.750 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:56.750 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.316 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:57.316 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:57.316 21:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.574 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:57.574 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:57.574 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:58.139 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.139 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:58.139 21:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:58.399 Initializing NVMe Controllers 00:38:58.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:58.399 Controller IO queue size 128, less than required. 00:38:58.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:58.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:58.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:58.399 Initialization complete. Launching workers. 
00:38:58.399 ======================================================== 00:38:58.399 Latency(us) 00:38:58.399 Device Information : IOPS MiB/s Average min max 00:38:58.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004686.13 1000297.64 1013184.38 00:38:58.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005048.41 1000180.34 1011753.82 00:38:58.399 ======================================================== 00:38:58.399 Total : 256.00 0.12 1004867.27 1000180.34 1013184.38 00:38:58.399 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1881094 00:38:58.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1881094) - No such process 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1881094 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.659 rmmod nvme_tcp 00:38:58.659 rmmod nvme_fabrics 00:38:58.659 rmmod nvme_keyring 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1880415 ']' 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1880415 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1880415 ']' 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1880415 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:58.659 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1880415 00:38:58.919 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:58.919 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:58.919 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1880415' 00:38:58.919 killing process with pid 1880415 00:38:58.919 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1880415 00:38:58.919 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1880415 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.179 21:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.719 00:39:01.719 real 0m15.032s 00:39:01.719 user 0m26.042s 00:39:01.719 sys 0m5.213s 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.719 ************************************ 00:39:01.719 END TEST nvmf_delete_subsystem 00:39:01.719 ************************************ 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:01.719 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:39:01.720 21:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:01.720 ************************************ 00:39:01.720 START TEST nvmf_host_management 00:39:01.720 ************************************ 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:01.720 * Looking for test storage... 00:39:01.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.720 --rc genhtml_branch_coverage=1 00:39:01.720 --rc genhtml_function_coverage=1 00:39:01.720 --rc genhtml_legend=1 00:39:01.720 --rc geninfo_all_blocks=1 00:39:01.720 --rc geninfo_unexecuted_blocks=1 00:39:01.720 00:39:01.720 ' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.720 --rc genhtml_branch_coverage=1 00:39:01.720 --rc genhtml_function_coverage=1 00:39:01.720 --rc genhtml_legend=1 00:39:01.720 --rc geninfo_all_blocks=1 00:39:01.720 --rc geninfo_unexecuted_blocks=1 00:39:01.720 00:39:01.720 ' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.720 --rc genhtml_branch_coverage=1 00:39:01.720 --rc genhtml_function_coverage=1 00:39:01.720 --rc genhtml_legend=1 00:39:01.720 --rc geninfo_all_blocks=1 00:39:01.720 --rc geninfo_unexecuted_blocks=1 00:39:01.720 00:39:01.720 ' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.720 --rc genhtml_branch_coverage=1 00:39:01.720 --rc genhtml_function_coverage=1 00:39:01.720 --rc genhtml_legend=1 
00:39:01.720 --rc geninfo_all_blocks=1 00:39:01.720 --rc geninfo_unexecuted_blocks=1 00:39:01.720 00:39:01.720 ' 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.720 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.721 21:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.721 21:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:04.258 21:06:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:04.258 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:04.259 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:04.259 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
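The discovery loop traced here resolves each supported PCI NIC (the two Intel E810 ports at 0000:84:00.0 and 0000:84:00.1) to its kernel net device by globbing sysfs. The mapping step, lifted from the nvmf/common.sh xtrace lines around this point and shown standalone:

    # For a given PCI address, find the net device(s) the kernel created for it.
    pci=0000:84:00.0                                   # value from the 'Found 0000:84:00.0' line above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The first port found becomes the target-side interface (cvl_0_0) and the second the initiator-side interface (cvl_0_1), as the NVMF_TARGET_INTERFACE and NVMF_INITIATOR_INTERFACE assignments a few lines below show.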
00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:04.259 Found net devices under 0000:84:00.0: cvl_0_0 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:04.259 21:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:04.259 Found net devices under 0000:84:00.1: cvl_0_1 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:04.259 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:04.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:04.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:39:04.520 00:39:04.520 --- 10.0.0.2 ping statistics --- 00:39:04.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.520 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:04.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:04.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:39:04.520 00:39:04.520 --- 10.0.0.1 ping statistics --- 00:39:04.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.520 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1883575 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1883575 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1883575 ']' 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:04.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:04.520 21:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:04.520 [2024-10-08 21:06:33.258438] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:04.520 [2024-10-08 21:06:33.259758] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:39:04.520 [2024-10-08 21:06:33.259829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:04.781 [2024-10-08 21:06:33.368898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:05.042 [2024-10-08 21:06:33.591754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:05.042 [2024-10-08 21:06:33.591866] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:05.042 [2024-10-08 21:06:33.591903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:05.042 [2024-10-08 21:06:33.591934] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:05.042 [2024-10-08 21:06:33.591960] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:05.042 [2024-10-08 21:06:33.595518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:05.042 [2024-10-08 21:06:33.595611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:05.042 [2024-10-08 21:06:33.595677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:39:05.042 [2024-10-08 21:06:33.595683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:05.042 [2024-10-08 21:06:33.777489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:05.042 [2024-10-08 21:06:33.777712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:05.042 [2024-10-08 21:06:33.778013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:05.042 [2024-10-08 21:06:33.778974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:05.043 [2024-10-08 21:06:33.779549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
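At this point nvmf_tcp_init has split the two E810 ports into a self-contained loopback topology: the target-side port lives in a private network namespace at 10.0.0.2 and the initiator-side port stays in the default namespace at 10.0.0.1, which is what the two ping checks above verify. Condensed from the commands in the trace (absolute paths shortened), the setup and the interrupt-mode target launch amount to:

    # Target port in its own netns; initiator port in the default namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through

    # Launch the target inside the namespace with interrupt mode on cores 1-4 (-m 0x1E).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!

The thread.c notices just above confirm the --interrupt-mode flag took effect: each nvmf_tgt poll-group thread and the app_thread are switched to intr mode before the test proceeds.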
00:39:05.981 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 [2024-10-08 21:06:34.452646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 Malloc0 00:39:05.982 [2024-10-08 21:06:34.540637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1883750 00:39:05.982 21:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1883750 /var/tmp/bdevperf.sock 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1883750 ']' 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:05.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:05.982 { 00:39:05.982 "params": { 00:39:05.982 "name": "Nvme$subsystem", 00:39:05.982 "trtype": "$TEST_TRANSPORT", 00:39:05.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:05.982 "adrfam": "ipv4", 00:39:05.982 "trsvcid": "$NVMF_PORT", 00:39:05.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:05.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:05.982 "hdgst": ${hdgst:-false}, 00:39:05.982 "ddgst": ${ddgst:-false} 00:39:05.982 }, 00:39:05.982 "method": "bdev_nvme_attach_controller" 00:39:05.982 } 00:39:05.982 EOF 00:39:05.982 )") 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
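On the initiator side, bdevperf receives its bdev configuration as JSON on an inherited file descriptor rather than from a file on disk: the --json /dev/fd/63 argument in the command traced above suggests the output of gen_nvmf_target_json is fed in via process substitution (an inference from the trace; the log does not show the script text itself). A hypothetical standalone equivalent, assuming nvmf/common.sh is sourced so gen_nvmf_target_json is defined, with the binary path shortened:

    # gen_nvmf_target_json 0 emits the '{"params": {"name": "Nvme0", ...}}' attach-controller
    # config that is printed a little further down in the log.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

waitforlisten then blocks on /var/tmp/bdevperf.sock, and the bdev_get_iostat polling that follows (read_io_count) decides when enough I/O has completed for the host-management steps to begin.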
00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:39:05.982 21:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:05.982 "params": { 00:39:05.982 "name": "Nvme0", 00:39:05.982 "trtype": "tcp", 00:39:05.982 "traddr": "10.0.0.2", 00:39:05.982 "adrfam": "ipv4", 00:39:05.982 "trsvcid": "4420", 00:39:05.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:05.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:05.982 "hdgst": false, 00:39:05.982 "ddgst": false 00:39:05.982 }, 00:39:05.982 "method": "bdev_nvme_attach_controller" 00:39:05.982 }' 00:39:05.982 [2024-10-08 21:06:34.627015] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:39:05.982 [2024-10-08 21:06:34.627118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883750 ] 00:39:05.982 [2024-10-08 21:06:34.702154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.241 [2024-10-08 21:06:34.822732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.499 Running I/O for 10 seconds... 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:39:06.499 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.758 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.018 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.018 [2024-10-08 21:06:35.528516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528595] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the 
state(6) to be set 00:39:07.018 [2024-10-08 21:06:35.528896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.528997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159bc40 is same with the state(6) to be set 00:39:07.019 [2024-10-08 21:06:35.529479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.529973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.529986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.019 [2024-10-08 21:06:35.530149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.019 [2024-10-08 21:06:35.530163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:07.020 [2024-10-08 21:06:35.530241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 
21:06:35.530530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530844] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.530981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.530997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.020 [2024-10-08 21:06:35.531306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.020 [2024-10-08 21:06:35.531324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.021 [2024-10-08 21:06:35.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.021 [2024-10-08 21:06:35.531385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.021 [2024-10-08 21:06:35.531414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.021 [2024-10-08 21:06:35.531444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:07.021 [2024-10-08 21:06:35.531474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:07.021 [2024-10-08 21:06:35.531489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a270 is same with the state(6) to be set 00:39:07.021 [2024-10-08 21:06:35.531575] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x96a270 was disconnected and freed. reset controller. 00:39:07.021 [2024-10-08 21:06:35.532787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.021 task offset: 73728 on job bdev=Nvme0n1 fails 00:39:07.021 00:39:07.021 Latency(us) 00:39:07.021 [2024-10-08T19:06:35.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:07.021 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:07.021 Job: Nvme0n1 ended in about 0.40 seconds with error 00:39:07.021 Verification LBA range: start 0x0 length 0x400 00:39:07.021 Nvme0n1 : 0.40 1426.47 89.15 158.50 0.00 39259.15 5728.33 34564.17 00:39:07.021 [2024-10-08T19:06:35.784Z] =================================================================================================================== 00:39:07.021 [2024-10-08T19:06:35.784Z] Total : 1426.47 89.15 158.50 0.00 39259.15 5728.33 34564.17 00:39:07.021 [2024-10-08 21:06:35.536032] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:07.021 [2024-10-08 21:06:35.536079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x751100 (9): Bad file descriptor 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.021 21:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:07.021 [2024-10-08 21:06:35.580027] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
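Editor's note: stripped of the per-I/O logging above, the sequence the host_management test just exercised is: wait until bdevperf has issued a threshold of reads, revoke the host's access so the target drops the qpair (the in-flight READs complete as ABORTED - SQ DELETION), then re-admit the host so bdev_nvme can reset and reconnect the controller. Below is a condensed, non-authoritative sketch using the same RPCs and jq filter seen in the trace; the rpc.py path is abbreviated and the 100-read threshold mirrors the '-ge 100' check traced earlier.

# Gate on real traffic before breaking the connection.
while [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
    sleep 0.25
done

# Revoke the host: the target tears down the qpair and aborts outstanding I/O.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-admit the host: bdev_nvme resets the controller and reconnects.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The original bdevperf process is then killed and a fresh one-second run (the -t 1 invocation below) confirms that I/O flows again after the reset.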
00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1883750 00:39:07.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1883750) - No such process 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:07.956 { 00:39:07.956 "params": { 00:39:07.956 "name": "Nvme$subsystem", 00:39:07.956 "trtype": "$TEST_TRANSPORT", 00:39:07.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:07.956 "adrfam": "ipv4", 00:39:07.956 "trsvcid": "$NVMF_PORT", 00:39:07.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:07.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:07.956 "hdgst": ${hdgst:-false}, 00:39:07.956 "ddgst": ${ddgst:-false} 00:39:07.956 }, 00:39:07.956 "method": "bdev_nvme_attach_controller" 00:39:07.956 } 00:39:07.956 EOF 00:39:07.956 )") 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:39:07.956 21:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:07.956 "params": { 00:39:07.956 "name": "Nvme0", 00:39:07.956 "trtype": "tcp", 00:39:07.956 "traddr": "10.0.0.2", 00:39:07.956 "adrfam": "ipv4", 00:39:07.956 "trsvcid": "4420", 00:39:07.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.956 "hdgst": false, 00:39:07.956 "ddgst": false 00:39:07.956 }, 00:39:07.956 "method": "bdev_nvme_attach_controller" 00:39:07.956 }' 00:39:07.956 [2024-10-08 21:06:36.600647] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:39:07.956 [2024-10-08 21:06:36.600807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883919 ] 00:39:07.956 [2024-10-08 21:06:36.673914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.214 [2024-10-08 21:06:36.789134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.598 Running I/O for 1 seconds... 00:39:09.533 1560.00 IOPS, 97.50 MiB/s 00:39:09.533 Latency(us) 00:39:09.533 [2024-10-08T19:06:38.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.533 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:09.533 Verification LBA range: start 0x0 length 0x400 00:39:09.533 Nvme0n1 : 1.01 1601.96 100.12 0.00 0.00 39132.71 2257.35 34175.81 00:39:09.533 [2024-10-08T19:06:38.296Z] =================================================================================================================== 00:39:09.533 [2024-10-08T19:06:38.296Z] Total : 1601.96 100.12 0.00 0.00 39132.71 2257.35 34175.81 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.790 rmmod nvme_tcp 00:39:09.790 rmmod nvme_fabrics 00:39:09.790 rmmod nvme_keyring 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1883575 ']' 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1883575 00:39:09.790 21:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1883575 ']' 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1883575 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1883575 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1883575' 00:39:09.790 killing process with pid 1883575 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1883575 00:39:09.790 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1883575 00:39:10.360 [2024-10-08 21:06:38.884468] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.360 21:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.269 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:12.269 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:12.269 00:39:12.269 real 0m11.010s 00:39:12.269 user 
0m19.853s 00:39:12.269 sys 0m4.727s 00:39:12.269 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.269 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.269 ************************************ 00:39:12.269 END TEST nvmf_host_management 00:39:12.269 ************************************ 00:39:12.527 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:12.527 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:12.527 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:12.528 ************************************ 00:39:12.528 START TEST nvmf_lvol 00:39:12.528 ************************************ 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:12.528 * Looking for test storage... 00:39:12.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.528 --rc genhtml_branch_coverage=1 00:39:12.528 --rc genhtml_function_coverage=1 00:39:12.528 --rc genhtml_legend=1 00:39:12.528 --rc geninfo_all_blocks=1 00:39:12.528 --rc geninfo_unexecuted_blocks=1 00:39:12.528 00:39:12.528 ' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.528 --rc genhtml_branch_coverage=1 00:39:12.528 --rc genhtml_function_coverage=1 00:39:12.528 --rc genhtml_legend=1 00:39:12.528 --rc geninfo_all_blocks=1 00:39:12.528 --rc geninfo_unexecuted_blocks=1 00:39:12.528 00:39:12.528 ' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.528 --rc genhtml_branch_coverage=1 00:39:12.528 --rc genhtml_function_coverage=1 00:39:12.528 --rc genhtml_legend=1 00:39:12.528 --rc geninfo_all_blocks=1 00:39:12.528 --rc geninfo_unexecuted_blocks=1 00:39:12.528 00:39:12.528 ' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.528 --rc genhtml_branch_coverage=1 00:39:12.528 --rc genhtml_function_coverage=1 
00:39:12.528 --rc genhtml_legend=1 00:39:12.528 --rc geninfo_all_blocks=1 00:39:12.528 --rc geninfo_unexecuted_blocks=1 00:39:12.528 00:39:12.528 ' 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.528 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.788 21:06:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.788 21:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:16.076 21:06:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:16.076 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:16.076 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:16.076 Found net devices under 0000:84:00.0: cvl_0_0 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.076 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:16.077 Found net devices under 0000:84:00.1: cvl_0_1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:16.077 
21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:16.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:16.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:39:16.077 00:39:16.077 --- 10.0.0.2 ping statistics --- 00:39:16.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.077 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:16.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:16.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:39:16.077 00:39:16.077 --- 10.0.0.1 ping statistics --- 00:39:16.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.077 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1886249 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1886249 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1886249 ']' 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:16.077 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.077 [2024-10-08 21:06:44.481474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:39:16.077 [2024-10-08 21:06:44.482861] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:39:16.077 [2024-10-08 21:06:44.482927] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.077 [2024-10-08 21:06:44.559865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:16.077 [2024-10-08 21:06:44.686035] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.077 [2024-10-08 21:06:44.686107] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.077 [2024-10-08 21:06:44.686124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.077 [2024-10-08 21:06:44.686137] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.077 [2024-10-08 21:06:44.686149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:16.077 [2024-10-08 21:06:44.687222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.077 [2024-10-08 21:06:44.687277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:16.077 [2024-10-08 21:06:44.687281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.077 [2024-10-08 21:06:44.837152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:16.077 [2024-10-08 21:06:44.837444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:16.077 [2024-10-08 21:06:44.837452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.337 [2024-10-08 21:06:44.837841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
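Summary of the setup traced above (nvmftestinit / nvmf_tcp_init): the two ice/e810 ports are wired into a loopback NVMe/TCP topology and the nvmf target is then started inside a network namespace in interrupt mode. A minimal sketch of that sequence, reconstructed from the traced commands; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are specific to this run:

# target-side port moves into its own namespace; initiator-side port stays in the default namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity-check reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# the nvmf target is then launched inside the namespace with --interrupt-mode (pid 1886249 in this run)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7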
00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.337 21:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:16.905 [2024-10-08 21:06:45.556344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.905 21:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:17.474 21:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:17.474 21:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:18.046 21:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:18.046 21:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:18.615 21:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:19.558 21:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d6fe4f31-62c5-4e17-92c2-26081e37ed26 00:39:19.558 21:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6fe4f31-62c5-4e17-92c2-26081e37ed26 lvol 20 00:39:19.823 21:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=08a8ffcf-c28e-4b9c-8cd1-ef640171b86b 00:39:19.823 21:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:20.393 21:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08a8ffcf-c28e-4b9c-8cd1-ef640171b86b 00:39:20.962 21:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:21.530 [2024-10-08 21:06:50.028348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:21.530 21:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:21.789 21:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1886935 00:39:21.789 21:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:21.789 21:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:22.724 21:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 08a8ffcf-c28e-4b9c-8cd1-ef640171b86b MY_SNAPSHOT 00:39:23.289 21:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d56a6078-f2c7-4598-bb02-0259091eb1c1 00:39:23.289 21:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 08a8ffcf-c28e-4b9c-8cd1-ef640171b86b 30 00:39:23.547 21:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d56a6078-f2c7-4598-bb02-0259091eb1c1 MY_CLONE 00:39:24.112 21:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7c2fd7a8-7b78-4c0f-a32a-e7d0d5a7dedd 00:39:24.112 21:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7c2fd7a8-7b78-4c0f-a32a-e7d0d5a7dedd 00:39:25.049 21:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1886935 00:39:33.182 Initializing NVMe Controllers 00:39:33.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:33.182 Controller IO queue size 128, less than required. 00:39:33.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:33.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:33.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:33.182 Initialization complete. Launching workers. 
00:39:33.182 ======================================================== 00:39:33.182 Latency(us) 00:39:33.182 Device Information : IOPS MiB/s Average min max 00:39:33.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10415.42 40.69 12288.30 3364.04 58117.34 00:39:33.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10297.02 40.22 12434.26 3302.81 63638.91 00:39:33.182 ======================================================== 00:39:33.182 Total : 20712.44 80.91 12360.86 3302.81 63638.91 00:39:33.182 00:39:33.182 21:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.182 21:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08a8ffcf-c28e-4b9c-8cd1-ef640171b86b 00:39:33.183 21:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6fe4f31-62c5-4e17-92c2-26081e37ed26 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.751 rmmod nvme_tcp 00:39:33.751 rmmod nvme_fabrics 00:39:33.751 rmmod nvme_keyring 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1886249 ']' 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1886249 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1886249 ']' 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1886249 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:39:33.751 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1886249 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1886249' 00:39:34.010 killing process with pid 1886249 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1886249 00:39:34.010 21:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1886249 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.580 21:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.488 00:39:36.488 real 0m24.037s 00:39:36.488 user 1m3.570s 00:39:36.488 sys 0m9.770s 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:36.488 ************************************ 00:39:36.488 END TEST nvmf_lvol 00:39:36.488 ************************************ 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.488 ************************************ 00:39:36.488 START TEST nvmf_lvs_grow 00:39:36.488 
************************************ 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:36.488 * Looking for test storage... 00:39:36.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:39:36.488 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.748 --rc genhtml_branch_coverage=1 00:39:36.748 --rc genhtml_function_coverage=1 00:39:36.748 --rc genhtml_legend=1 00:39:36.748 --rc geninfo_all_blocks=1 00:39:36.748 --rc geninfo_unexecuted_blocks=1 00:39:36.748 00:39:36.748 ' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.748 --rc genhtml_branch_coverage=1 00:39:36.748 --rc genhtml_function_coverage=1 00:39:36.748 --rc genhtml_legend=1 00:39:36.748 --rc geninfo_all_blocks=1 00:39:36.748 --rc geninfo_unexecuted_blocks=1 00:39:36.748 00:39:36.748 ' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.748 --rc genhtml_branch_coverage=1 00:39:36.748 --rc genhtml_function_coverage=1 00:39:36.748 --rc genhtml_legend=1 00:39:36.748 --rc geninfo_all_blocks=1 00:39:36.748 --rc geninfo_unexecuted_blocks=1 00:39:36.748 00:39:36.748 ' 00:39:36.748 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:36.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.749 --rc genhtml_branch_coverage=1 00:39:36.749 --rc genhtml_function_coverage=1 00:39:36.749 --rc genhtml_legend=1 00:39:36.749 --rc geninfo_all_blocks=1 00:39:36.749 --rc geninfo_unexecuted_blocks=1 00:39:36.749 00:39:36.749 ' 00:39:36.749 21:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:36.749 21:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:40.038 21:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:40.038 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:40.038 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:40.038 Found net devices under 0000:84:00.0: cvl_0_0 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:40.038 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:40.039 Found net devices under 0000:84:00.1: cvl_0_1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:40.039 21:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:39:40.039 00:39:40.039 --- 10.0.0.2 ping statistics --- 00:39:40.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.039 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:39:40.039 00:39:40.039 --- 10.0.0.1 ping statistics --- 00:39:40.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.039 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1890331 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1890331 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1890331 ']' 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:40.039 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:40.039 [2024-10-08 21:07:08.366567] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
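nvmfappstart launches the target inside the freshly created cvl_0_0_ns_spdk namespace, pinned to core 0 (-m 0x1) and with --interrupt-mode, then blocks until the default RPC socket answers. A minimal sketch of that launch-and-wait step (paths as in this workspace; the 30-second cap and the use of rpc_get_methods as a liveness probe are assumptions, the real waitforlisten helper has its own retry logic):

APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# single core, all tracepoint groups, interrupt (non-polling) mode, inside the target namespace
ip netns exec cvl_0_0_ns_spdk "$APP" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!

# poll the UNIX-domain RPC socket until the app responds, or give up after roughly 30 seconds
for _ in $(seq 1 300); do
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done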
00:39:40.039 [2024-10-08 21:07:08.368070] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:39:40.039 [2024-10-08 21:07:08.368146] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.039 [2024-10-08 21:07:08.491521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.039 [2024-10-08 21:07:08.699865] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.039 [2024-10-08 21:07:08.699982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.039 [2024-10-08 21:07:08.700019] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.039 [2024-10-08 21:07:08.700054] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.039 [2024-10-08 21:07:08.700082] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.039 [2024-10-08 21:07:08.701429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.299 [2024-10-08 21:07:08.874913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.299 [2024-10-08 21:07:08.875611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:40.299 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:40.299 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:39:40.299 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:40.299 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:40.299 21:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:40.299 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.299 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:41.237 [2024-10-08 21:07:09.706781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.237 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:41.237 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:41.237 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:41.237 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:41.237 ************************************ 00:39:41.237 START TEST lvs_grow_clean 00:39:41.237 ************************************ 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.238 21:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:41.807 21:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:41.807 21:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:42.374 21:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:42.374 21:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:42.374 21:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:42.942 21:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:42.942 21:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:42.942 21:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e lvol 150 00:39:43.882 21:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c93880c-b97a-47ff-a7b5-773bd26476ea 00:39:43.882 21:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:43.882 21:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:44.451 [2024-10-08 21:07:12.934459] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:44.451 [2024-10-08 21:07:12.934701] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:44.451 true 00:39:44.451 21:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:44.451 21:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:44.709 21:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:44.709 21:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:44.967 21:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c93880c-b97a-47ff-a7b5-773bd26476ea 00:39:45.534 21:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:45.534 [2024-10-08 21:07:14.266704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.534 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1891067 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1891067 /var/tmp/bdevperf.sock 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1891067 ']' 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:46.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:46.102 21:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:46.102 [2024-10-08 21:07:14.619351] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:39:46.102 [2024-10-08 21:07:14.619525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891067 ] 00:39:46.102 [2024-10-08 21:07:14.750642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.362 [2024-10-08 21:07:14.972984] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.621 21:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:46.621 21:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:39:46.621 21:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:47.187 Nvme0n1 00:39:47.187 21:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:47.445 [ 00:39:47.445 { 00:39:47.445 "name": "Nvme0n1", 00:39:47.445 "aliases": [ 00:39:47.445 "0c93880c-b97a-47ff-a7b5-773bd26476ea" 00:39:47.445 ], 00:39:47.445 "product_name": "NVMe disk", 00:39:47.445 "block_size": 4096, 00:39:47.445 "num_blocks": 38912, 00:39:47.445 "uuid": "0c93880c-b97a-47ff-a7b5-773bd26476ea", 00:39:47.445 "numa_id": 1, 00:39:47.445 "assigned_rate_limits": { 00:39:47.445 "rw_ios_per_sec": 0, 00:39:47.445 "rw_mbytes_per_sec": 0, 00:39:47.445 "r_mbytes_per_sec": 0, 00:39:47.445 "w_mbytes_per_sec": 0 00:39:47.445 }, 00:39:47.445 "claimed": false, 00:39:47.445 "zoned": false, 00:39:47.445 "supported_io_types": { 00:39:47.445 "read": true, 00:39:47.445 "write": true, 00:39:47.445 "unmap": true, 00:39:47.445 "flush": true, 00:39:47.445 "reset": true, 00:39:47.445 "nvme_admin": true, 00:39:47.445 "nvme_io": true, 00:39:47.445 "nvme_io_md": false, 00:39:47.445 "write_zeroes": true, 00:39:47.445 "zcopy": false, 00:39:47.445 "get_zone_info": false, 00:39:47.445 "zone_management": false, 00:39:47.445 "zone_append": false, 00:39:47.445 "compare": true, 00:39:47.445 "compare_and_write": true, 00:39:47.445 "abort": true, 00:39:47.445 "seek_hole": false, 00:39:47.445 "seek_data": false, 00:39:47.445 "copy": true, 
00:39:47.445 "nvme_iov_md": false 00:39:47.445 }, 00:39:47.445 "memory_domains": [ 00:39:47.445 { 00:39:47.445 "dma_device_id": "system", 00:39:47.445 "dma_device_type": 1 00:39:47.445 } 00:39:47.445 ], 00:39:47.445 "driver_specific": { 00:39:47.445 "nvme": [ 00:39:47.445 { 00:39:47.445 "trid": { 00:39:47.445 "trtype": "TCP", 00:39:47.445 "adrfam": "IPv4", 00:39:47.445 "traddr": "10.0.0.2", 00:39:47.445 "trsvcid": "4420", 00:39:47.445 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:47.445 }, 00:39:47.445 "ctrlr_data": { 00:39:47.446 "cntlid": 1, 00:39:47.446 "vendor_id": "0x8086", 00:39:47.446 "model_number": "SPDK bdev Controller", 00:39:47.446 "serial_number": "SPDK0", 00:39:47.446 "firmware_revision": "25.01", 00:39:47.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:47.446 "oacs": { 00:39:47.446 "security": 0, 00:39:47.446 "format": 0, 00:39:47.446 "firmware": 0, 00:39:47.446 "ns_manage": 0 00:39:47.446 }, 00:39:47.446 "multi_ctrlr": true, 00:39:47.446 "ana_reporting": false 00:39:47.446 }, 00:39:47.446 "vs": { 00:39:47.446 "nvme_version": "1.3" 00:39:47.446 }, 00:39:47.446 "ns_data": { 00:39:47.446 "id": 1, 00:39:47.446 "can_share": true 00:39:47.446 } 00:39:47.446 } 00:39:47.446 ], 00:39:47.446 "mp_policy": "active_passive" 00:39:47.446 } 00:39:47.446 } 00:39:47.446 ] 00:39:47.446 21:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1891291 00:39:47.446 21:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:47.446 21:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:47.704 Running I/O for 10 seconds... 
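Everything bdevperf does here is driven over its private RPC socket (-r /var/tmp/bdevperf.sock): the Nvme0 controller was attached and the resulting Nvme0n1 bdev dumped with the two calls traced above, and bdevperf.py perform_tests then starts the 10-second random-write job whose per-second figures follow. Re-issuing those RPCs by hand would look like this sketch (addresses, names and timeouts copied from this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# build an NVMe/TCP controller inside the bdevperf process; its namespace shows up as Nvme0n1
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# the exported lvol: 38912 blocks of 4096 bytes, i.e. 150 MiB rounded up to 38 whole 4 MiB clusters
"$RPC" -s "$SOCK" bdev_get_bdevs -b Nvme0n1 -t 3000

# kick off the configured workload
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests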
00:39:48.641 Latency(us) 00:39:48.641 [2024-10-08T19:07:17.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.641 Nvme0n1 : 1.00 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:39:48.641 [2024-10-08T19:07:17.404Z] =================================================================================================================== 00:39:48.641 [2024-10-08T19:07:17.404Z] Total : 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:39:48.641 00:39:49.581 21:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:49.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.581 Nvme0n1 : 2.00 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:39:49.581 [2024-10-08T19:07:18.344Z] =================================================================================================================== 00:39:49.581 [2024-10-08T19:07:18.344Z] Total : 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:39:49.581 00:39:50.150 true 00:39:50.150 21:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:50.150 21:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:50.410 21:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:50.410 21:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:50.410 21:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1891291 00:39:50.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.670 Nvme0n1 : 3.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:39:50.670 [2024-10-08T19:07:19.433Z] =================================================================================================================== 00:39:50.670 [2024-10-08T19:07:19.433Z] Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:39:50.670 00:39:51.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.610 Nvme0n1 : 4.00 7469.75 29.18 0.00 0.00 0.00 0.00 0.00 00:39:51.610 [2024-10-08T19:07:20.373Z] =================================================================================================================== 00:39:51.610 [2024-10-08T19:07:20.373Z] Total : 7469.75 29.18 0.00 0.00 0.00 0.00 0.00 00:39:51.610 00:39:52.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.596 Nvme0n1 : 5.00 7271.20 28.40 0.00 0.00 0.00 0.00 0.00 00:39:52.596 [2024-10-08T19:07:21.359Z] =================================================================================================================== 00:39:52.596 [2024-10-08T19:07:21.359Z] Total : 7271.20 28.40 0.00 0.00 0.00 0.00 0.00 00:39:52.596 00:39:53.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.978 Nvme0n1 : 6.00 7138.83 27.89 0.00 0.00 0.00 0.00 0.00 00:39:53.978 [2024-10-08T19:07:22.741Z] 
=================================================================================================================== 00:39:53.978 [2024-10-08T19:07:22.741Z] Total : 7138.83 27.89 0.00 0.00 0.00 0.00 0.00 00:39:53.978 00:39:54.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.918 Nvme0n1 : 7.00 7026.14 27.45 0.00 0.00 0.00 0.00 0.00 00:39:54.918 [2024-10-08T19:07:23.681Z] =================================================================================================================== 00:39:54.918 [2024-10-08T19:07:23.681Z] Total : 7026.14 27.45 0.00 0.00 0.00 0.00 0.00 00:39:54.918 00:39:55.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.858 Nvme0n1 : 8.00 6989.25 27.30 0.00 0.00 0.00 0.00 0.00 00:39:55.858 [2024-10-08T19:07:24.621Z] =================================================================================================================== 00:39:55.858 [2024-10-08T19:07:24.621Z] Total : 6989.25 27.30 0.00 0.00 0.00 0.00 0.00 00:39:55.858 00:39:56.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.798 Nvme0n1 : 9.00 6960.56 27.19 0.00 0.00 0.00 0.00 0.00 00:39:56.798 [2024-10-08T19:07:25.561Z] =================================================================================================================== 00:39:56.798 [2024-10-08T19:07:25.561Z] Total : 6960.56 27.19 0.00 0.00 0.00 0.00 0.00 00:39:56.798 00:39:57.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.776 Nvme0n1 : 10.00 6905.90 26.98 0.00 0.00 0.00 0.00 0.00 00:39:57.776 [2024-10-08T19:07:26.539Z] =================================================================================================================== 00:39:57.776 [2024-10-08T19:07:26.539Z] Total : 6905.90 26.98 0.00 0.00 0.00 0.00 0.00 00:39:57.776 00:39:57.776 00:39:57.776 Latency(us) 00:39:57.776 [2024-10-08T19:07:26.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.776 Nvme0n1 : 10.01 6904.94 26.97 0.00 0.00 18521.91 7864.32 38836.15 00:39:57.776 [2024-10-08T19:07:26.539Z] =================================================================================================================== 00:39:57.776 [2024-10-08T19:07:26.539Z] Total : 6904.94 26.97 0.00 0.00 18521.91 7864.32 38836.15 00:39:57.776 { 00:39:57.776 "results": [ 00:39:57.776 { 00:39:57.776 "job": "Nvme0n1", 00:39:57.776 "core_mask": "0x2", 00:39:57.776 "workload": "randwrite", 00:39:57.776 "status": "finished", 00:39:57.776 "queue_depth": 128, 00:39:57.776 "io_size": 4096, 00:39:57.776 "runtime": 10.010654, 00:39:57.776 "iops": 6904.943473223628, 00:39:57.776 "mibps": 26.972435442279796, 00:39:57.776 "io_failed": 0, 00:39:57.776 "io_timeout": 0, 00:39:57.776 "avg_latency_us": 18521.906835790844, 00:39:57.776 "min_latency_us": 7864.32, 00:39:57.776 "max_latency_us": 38836.148148148146 00:39:57.776 } 00:39:57.776 ], 00:39:57.776 "core_count": 1 00:39:57.776 } 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1891067 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1891067 ']' 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1891067 00:39:57.776 21:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1891067 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1891067' 00:39:57.776 killing process with pid 1891067 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1891067 00:39:57.776 Received shutdown signal, test time was about 10.000000 seconds 00:39:57.776 00:39:57.776 Latency(us) 00:39:57.776 [2024-10-08T19:07:26.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.776 [2024-10-08T19:07:26.539Z] =================================================================================================================== 00:39:57.776 [2024-10-08T19:07:26.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:57.776 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1891067 00:39:58.367 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:58.935 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:59.504 21:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:39:59.504 21:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:00.444 21:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:00.444 21:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:00.444 21:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:01.015 [2024-10-08 21:07:29.522550] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:01.015 21:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:01.015 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:01.584 request: 00:40:01.584 { 00:40:01.584 "uuid": "8f7c8cca-3fa3-47c8-bee5-99d6833f997e", 00:40:01.584 "method": "bdev_lvol_get_lvstores", 00:40:01.584 "req_id": 1 00:40:01.584 } 00:40:01.584 Got JSON-RPC error response 00:40:01.584 response: 00:40:01.584 { 00:40:01.584 "code": -19, 00:40:01.584 "message": "No such device" 00:40:01.584 } 00:40:01.584 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:40:01.584 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:01.584 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:01.584 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:01.584 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:02.151 aio_bdev 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c93880c-b97a-47ff-a7b5-773bd26476ea 
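At this point the clean-mode check works through the lvstore lifecycle: deleting the base aio_bdev closes lvstore lvs with it, so the wrapped bdev_lvol_get_lvstores call is expected to fail with -19 (No such device); re-creating the AIO bdev over the same backing file then lets examine re-load the metadata, and waitforbdev below confirms the lvol bdev reappears, with the grown lvstore still reporting 61 free of 99 total clusters. A condensed sketch of that sequence (UUIDs copied from this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
LVS=8f7c8cca-3fa3-47c8-bee5-99d6833f997e
LVOL=0c93880c-b97a-47ff-a7b5-773bd26476ea

"$RPC" bdev_aio_delete aio_bdev                   # closes lvstore "lvs" as a side effect
! "$RPC" bdev_lvol_get_lvstores -u "$LVS"         # expected to fail: No such device (-19)

"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096  # same file, same 4 KiB block size
"$RPC" bdev_wait_for_examine                      # let the lvol metadata be scanned again
"$RPC" bdev_get_bdevs -b "$LVOL" -t 2000          # the lvol is back, no explicit import needed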
00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0c93880c-b97a-47ff-a7b5-773bd26476ea 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:02.151 21:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:02.717 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c93880c-b97a-47ff-a7b5-773bd26476ea -t 2000 00:40:02.976 [ 00:40:02.976 { 00:40:02.976 "name": "0c93880c-b97a-47ff-a7b5-773bd26476ea", 00:40:02.976 "aliases": [ 00:40:02.976 "lvs/lvol" 00:40:02.976 ], 00:40:02.976 "product_name": "Logical Volume", 00:40:02.976 "block_size": 4096, 00:40:02.976 "num_blocks": 38912, 00:40:02.976 "uuid": "0c93880c-b97a-47ff-a7b5-773bd26476ea", 00:40:02.976 "assigned_rate_limits": { 00:40:02.976 "rw_ios_per_sec": 0, 00:40:02.976 "rw_mbytes_per_sec": 0, 00:40:02.976 "r_mbytes_per_sec": 0, 00:40:02.976 "w_mbytes_per_sec": 0 00:40:02.976 }, 00:40:02.976 "claimed": false, 00:40:02.976 "zoned": false, 00:40:02.976 "supported_io_types": { 00:40:02.976 "read": true, 00:40:02.976 "write": true, 00:40:02.976 "unmap": true, 00:40:02.976 "flush": false, 00:40:02.976 "reset": true, 00:40:02.976 "nvme_admin": false, 00:40:02.976 "nvme_io": false, 00:40:02.976 "nvme_io_md": false, 00:40:02.976 "write_zeroes": true, 00:40:02.976 "zcopy": false, 00:40:02.976 "get_zone_info": false, 00:40:02.976 "zone_management": false, 00:40:02.976 "zone_append": false, 00:40:02.976 "compare": false, 00:40:02.976 "compare_and_write": false, 00:40:02.976 "abort": false, 00:40:02.976 "seek_hole": true, 00:40:02.976 "seek_data": true, 00:40:02.976 "copy": false, 00:40:02.976 "nvme_iov_md": false 00:40:02.976 }, 00:40:02.976 "driver_specific": { 00:40:02.976 "lvol": { 00:40:02.976 "lvol_store_uuid": "8f7c8cca-3fa3-47c8-bee5-99d6833f997e", 00:40:02.976 "base_bdev": "aio_bdev", 00:40:02.976 "thin_provision": false, 00:40:02.976 "num_allocated_clusters": 38, 00:40:02.976 "snapshot": false, 00:40:02.976 "clone": false, 00:40:02.976 "esnap_clone": false 00:40:02.976 } 00:40:02.976 } 00:40:02.976 } 00:40:02.976 ] 00:40:03.236 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:40:03.236 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:03.236 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:03.494 21:07:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:03.494 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:03.494 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:03.753 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:03.753 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c93880c-b97a-47ff-a7b5-773bd26476ea 00:40:04.323 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f7c8cca-3fa3-47c8-bee5-99d6833f997e 00:40:04.892 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:05.460 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.460 00:40:05.460 real 0m24.429s 00:40:05.460 user 0m24.093s 00:40:05.460 sys 0m2.769s 00:40:05.460 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:05.460 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:05.460 ************************************ 00:40:05.460 END TEST lvs_grow_clean 00:40:05.460 ************************************ 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:05.720 ************************************ 00:40:05.720 START TEST lvs_grow_dirty 00:40:05.720 ************************************ 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.720 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:06.289 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:06.289 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:07.226 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:07.226 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:07.226 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:07.484 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:07.484 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:07.484 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 lvol 150 00:40:07.744 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:07.744 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:07.744 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:08.313 [2024-10-08 21:07:37.030458] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:08.313 [2024-10-08 21:07:37.030702] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:08.313 true 00:40:08.313 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:08.313 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:09.251 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:09.252 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:09.252 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:10.191 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:10.761 [2024-10-08 21:07:39.359037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.761 21:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1893960 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1893960 /var/tmp/bdevperf.sock 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1893960 ']' 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:11.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
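For the dirty variant the same external bdevperf pattern is reused: the process is started with -z so it idles until told to run, the script waits for its private RPC socket, and only then attaches the target and later triggers perform_tests. A standalone sketch of that start-and-wait step (flags copied from the command line above; the polling loop is an assumption standing in for waitforlisten and presumes no stale socket from the earlier run):

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock

# core 1 (0x2), 4 KiB I/O, queue depth 128, 10 s of random writes, periodic status (-S 1);
# -z keeps bdevperf idle until a perform_tests RPC arrives
"$BDEVPERF" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!

# wait for the UNIX-domain RPC socket before sending any bdev_nvme_* configuration
while [ ! -S "$SOCK" ]; do sleep 0.1; done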
00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:11.331 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:11.331 [2024-10-08 21:07:40.068695] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:11.331 [2024-10-08 21:07:40.068807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893960 ] 00:40:11.590 [2024-10-08 21:07:40.195190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.590 [2024-10-08 21:07:40.351823] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.849 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:11.849 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:11.849 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:12.417 Nvme0n1 00:40:12.417 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:12.676 [ 00:40:12.676 { 00:40:12.676 "name": "Nvme0n1", 00:40:12.676 "aliases": [ 00:40:12.676 "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d" 00:40:12.676 ], 00:40:12.676 "product_name": "NVMe disk", 00:40:12.676 "block_size": 4096, 00:40:12.676 "num_blocks": 38912, 00:40:12.676 "uuid": "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d", 00:40:12.676 "numa_id": 1, 00:40:12.676 "assigned_rate_limits": { 00:40:12.676 "rw_ios_per_sec": 0, 00:40:12.676 "rw_mbytes_per_sec": 0, 00:40:12.676 "r_mbytes_per_sec": 0, 00:40:12.676 "w_mbytes_per_sec": 0 00:40:12.676 }, 00:40:12.676 "claimed": false, 00:40:12.676 "zoned": false, 00:40:12.676 "supported_io_types": { 00:40:12.676 "read": true, 00:40:12.676 "write": true, 00:40:12.676 "unmap": true, 00:40:12.676 "flush": true, 00:40:12.676 "reset": true, 00:40:12.676 "nvme_admin": true, 00:40:12.676 "nvme_io": true, 00:40:12.676 "nvme_io_md": false, 00:40:12.676 "write_zeroes": true, 00:40:12.676 "zcopy": false, 00:40:12.676 "get_zone_info": false, 00:40:12.676 "zone_management": false, 00:40:12.676 "zone_append": false, 00:40:12.676 "compare": true, 00:40:12.676 "compare_and_write": true, 00:40:12.676 "abort": true, 00:40:12.676 "seek_hole": false, 00:40:12.676 "seek_data": false, 00:40:12.676 "copy": true, 00:40:12.676 "nvme_iov_md": false 00:40:12.676 }, 00:40:12.676 "memory_domains": [ 00:40:12.676 { 00:40:12.676 "dma_device_id": "system", 00:40:12.676 "dma_device_type": 1 00:40:12.676 } 00:40:12.676 ], 00:40:12.676 "driver_specific": { 00:40:12.676 "nvme": [ 00:40:12.676 { 00:40:12.676 "trid": { 00:40:12.676 "trtype": "TCP", 00:40:12.676 "adrfam": "IPv4", 00:40:12.676 "traddr": "10.0.0.2", 00:40:12.676 "trsvcid": "4420", 00:40:12.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:12.676 }, 00:40:12.676 "ctrlr_data": 
{ 00:40:12.676 "cntlid": 1, 00:40:12.676 "vendor_id": "0x8086", 00:40:12.676 "model_number": "SPDK bdev Controller", 00:40:12.676 "serial_number": "SPDK0", 00:40:12.676 "firmware_revision": "25.01", 00:40:12.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.676 "oacs": { 00:40:12.676 "security": 0, 00:40:12.676 "format": 0, 00:40:12.676 "firmware": 0, 00:40:12.676 "ns_manage": 0 00:40:12.676 }, 00:40:12.676 "multi_ctrlr": true, 00:40:12.676 "ana_reporting": false 00:40:12.676 }, 00:40:12.676 "vs": { 00:40:12.676 "nvme_version": "1.3" 00:40:12.676 }, 00:40:12.676 "ns_data": { 00:40:12.676 "id": 1, 00:40:12.676 "can_share": true 00:40:12.676 } 00:40:12.676 } 00:40:12.676 ], 00:40:12.676 "mp_policy": "active_passive" 00:40:12.676 } 00:40:12.676 } 00:40:12.676 ] 00:40:12.676 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1894097 00:40:12.676 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:12.676 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:12.936 Running I/O for 10 seconds... 00:40:13.876 Latency(us) 00:40:13.876 [2024-10-08T19:07:42.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:13.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.876 Nvme0n1 : 1.00 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:40:13.876 [2024-10-08T19:07:42.639Z] =================================================================================================================== 00:40:13.876 [2024-10-08T19:07:42.639Z] Total : 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:40:13.876 00:40:14.817 21:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:14.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:14.817 Nvme0n1 : 2.00 6159.50 24.06 0.00 0.00 0.00 0.00 0.00 00:40:14.817 [2024-10-08T19:07:43.580Z] =================================================================================================================== 00:40:14.817 [2024-10-08T19:07:43.580Z] Total : 6159.50 24.06 0.00 0.00 0.00 0.00 0.00 00:40:14.817 00:40:15.386 true 00:40:15.386 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:15.386 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:15.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.956 Nvme0n1 : 3.00 6265.33 24.47 0.00 0.00 0.00 0.00 0.00 00:40:15.956 [2024-10-08T19:07:44.719Z] =================================================================================================================== 00:40:15.956 [2024-10-08T19:07:44.719Z] Total : 6265.33 24.47 0.00 0.00 0.00 0.00 0.00 00:40:15.956 00:40:15.956 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # 
data_clusters=99 00:40:15.956 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:15.956 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1894097 00:40:16.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:16.895 Nvme0n1 : 4.00 6254.75 24.43 0.00 0.00 0.00 0.00 0.00 00:40:16.895 [2024-10-08T19:07:45.658Z] =================================================================================================================== 00:40:16.895 [2024-10-08T19:07:45.658Z] Total : 6254.75 24.43 0.00 0.00 0.00 0.00 0.00 00:40:16.895 00:40:17.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.835 Nvme0n1 : 5.00 6299.20 24.61 0.00 0.00 0.00 0.00 0.00 00:40:17.835 [2024-10-08T19:07:46.598Z] =================================================================================================================== 00:40:17.835 [2024-10-08T19:07:46.598Z] Total : 6299.20 24.61 0.00 0.00 0.00 0.00 0.00 00:40:17.835 00:40:19.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.217 Nvme0n1 : 6.00 6328.83 24.72 0.00 0.00 0.00 0.00 0.00 00:40:19.217 [2024-10-08T19:07:47.980Z] =================================================================================================================== 00:40:19.217 [2024-10-08T19:07:47.980Z] Total : 6328.83 24.72 0.00 0.00 0.00 0.00 0.00 00:40:19.217 00:40:20.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.156 Nvme0n1 : 7.00 6331.86 24.73 0.00 0.00 0.00 0.00 0.00 00:40:20.156 [2024-10-08T19:07:48.919Z] =================================================================================================================== 00:40:20.156 [2024-10-08T19:07:48.919Z] Total : 6331.86 24.73 0.00 0.00 0.00 0.00 0.00 00:40:20.156 00:40:21.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:21.095 Nvme0n1 : 8.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:40:21.095 [2024-10-08T19:07:49.858Z] =================================================================================================================== 00:40:21.095 [2024-10-08T19:07:49.858Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:40:21.095 00:40:22.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:22.035 Nvme0n1 : 9.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:40:22.035 [2024-10-08T19:07:50.798Z] =================================================================================================================== 00:40:22.035 [2024-10-08T19:07:50.798Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:40:22.035 00:40:22.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:22.973 Nvme0n1 : 10.00 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:40:22.973 [2024-10-08T19:07:51.736Z] =================================================================================================================== 00:40:22.973 [2024-10-08T19:07:51.736Z] Total : 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:40:22.973 00:40:22.973 00:40:22.973 Latency(us) 00:40:22.973 [2024-10-08T19:07:51.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:22.973 Nvme0n1 : 10.02 6438.21 25.15 0.00 0.00 19867.78 9175.04 48545.19 00:40:22.973 
[2024-10-08T19:07:51.736Z] =================================================================================================================== 00:40:22.973 [2024-10-08T19:07:51.736Z] Total : 6438.21 25.15 0.00 0.00 19867.78 9175.04 48545.19 00:40:22.973 { 00:40:22.973 "results": [ 00:40:22.973 { 00:40:22.973 "job": "Nvme0n1", 00:40:22.973 "core_mask": "0x2", 00:40:22.973 "workload": "randwrite", 00:40:22.973 "status": "finished", 00:40:22.973 "queue_depth": 128, 00:40:22.973 "io_size": 4096, 00:40:22.973 "runtime": 10.020951, 00:40:22.973 "iops": 6438.211303498041, 00:40:22.973 "mibps": 25.149262904289223, 00:40:22.973 "io_failed": 0, 00:40:22.973 "io_timeout": 0, 00:40:22.973 "avg_latency_us": 19867.782758514983, 00:40:22.973 "min_latency_us": 9175.04, 00:40:22.973 "max_latency_us": 48545.18518518518 00:40:22.973 } 00:40:22.973 ], 00:40:22.973 "core_count": 1 00:40:22.973 } 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1893960 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1893960 ']' 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1893960 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1893960 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1893960' 00:40:22.973 killing process with pid 1893960 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1893960 00:40:22.973 Received shutdown signal, test time was about 10.000000 seconds 00:40:22.973 00:40:22.973 Latency(us) 00:40:22.973 [2024-10-08T19:07:51.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.973 [2024-10-08T19:07:51.736Z] =================================================================================================================== 00:40:22.973 [2024-10-08T19:07:51.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:22.973 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1893960 00:40:23.543 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:24.135 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:24.751 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:24.751 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:25.316 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:25.316 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:25.316 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1890331 00:40:25.316 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1890331 00:40:25.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1890331 Killed "${NVMF_APP[@]}" "$@" 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1895518 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1895518 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1895518 ']' 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:25.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
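What makes this run the "dirty" variant is visible here: the grown lvstore has been written to, the original target (pid 1890331) is killed with SIGKILL so its metadata is never cleanly unloaded, and a fresh nvmf_tgt is started in interrupt mode to recover it. A sketch of that sequence, with $rpc, $nvmfpid and $SPDK_BIN_DIR abbreviating the values shown in the log:

# Sample the free cluster count after the randwrite workload on the grown lvstore.
free_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')

# Kill the target hard so the lvstore is left dirty on the AIO backing file.
kill -9 "$nvmfpid"

# Relaunch nvmf_tgt in the test namespace, this time with --interrupt-mode,
# and wait for its RPC socket before issuing recovery RPCs.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"    # autotest helper; defaults to /var/tmp/spdk.sock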
00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:25.575 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:25.575 [2024-10-08 21:07:54.187617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:25.575 [2024-10-08 21:07:54.189194] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:25.575 [2024-10-08 21:07:54.189272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:25.575 [2024-10-08 21:07:54.308394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.834 [2024-10-08 21:07:54.532554] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:25.834 [2024-10-08 21:07:54.532675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:25.834 [2024-10-08 21:07:54.532727] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:25.834 [2024-10-08 21:07:54.532758] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:25.834 [2024-10-08 21:07:54.532790] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:25.834 [2024-10-08 21:07:54.534085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.093 [2024-10-08 21:07:54.707685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:26.093 [2024-10-08 21:07:54.708351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
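The restarted target also prints how the tracepoints it was launched with (-e 0xFFFF) can be inspected; the two options it suggests are:

# Snapshot the nvmf tracepoint group from the running app (shm id 0 matches '-i 0').
spdk_trace -s nvmf -i 0

# Or keep the shared-memory trace file for offline analysis once the target exits
# (the destination path here is just an example).
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0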
00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.093 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:27.029 [2024-10-08 21:07:55.464281] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:27.029 [2024-10-08 21:07:55.464576] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:27.029 [2024-10-08 21:07:55.464737] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:27.029 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:27.287 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d -t 2000 00:40:27.854 [ 00:40:27.854 { 00:40:27.854 "name": "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d", 00:40:27.854 "aliases": [ 00:40:27.854 "lvs/lvol" 00:40:27.854 ], 00:40:27.854 "product_name": "Logical Volume", 00:40:27.854 "block_size": 4096, 00:40:27.854 "num_blocks": 38912, 00:40:27.854 "uuid": "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d", 00:40:27.854 "assigned_rate_limits": { 00:40:27.854 "rw_ios_per_sec": 0, 00:40:27.854 "rw_mbytes_per_sec": 0, 00:40:27.854 
"r_mbytes_per_sec": 0, 00:40:27.854 "w_mbytes_per_sec": 0 00:40:27.854 }, 00:40:27.854 "claimed": false, 00:40:27.854 "zoned": false, 00:40:27.854 "supported_io_types": { 00:40:27.854 "read": true, 00:40:27.854 "write": true, 00:40:27.854 "unmap": true, 00:40:27.854 "flush": false, 00:40:27.854 "reset": true, 00:40:27.854 "nvme_admin": false, 00:40:27.854 "nvme_io": false, 00:40:27.854 "nvme_io_md": false, 00:40:27.854 "write_zeroes": true, 00:40:27.854 "zcopy": false, 00:40:27.854 "get_zone_info": false, 00:40:27.854 "zone_management": false, 00:40:27.854 "zone_append": false, 00:40:27.854 "compare": false, 00:40:27.854 "compare_and_write": false, 00:40:27.854 "abort": false, 00:40:27.854 "seek_hole": true, 00:40:27.854 "seek_data": true, 00:40:27.854 "copy": false, 00:40:27.854 "nvme_iov_md": false 00:40:27.854 }, 00:40:27.854 "driver_specific": { 00:40:27.854 "lvol": { 00:40:27.854 "lvol_store_uuid": "30e2dcc4-12d8-43db-b473-00eed12b2d70", 00:40:27.854 "base_bdev": "aio_bdev", 00:40:27.854 "thin_provision": false, 00:40:27.854 "num_allocated_clusters": 38, 00:40:27.854 "snapshot": false, 00:40:27.854 "clone": false, 00:40:27.854 "esnap_clone": false 00:40:27.854 } 00:40:27.854 } 00:40:27.854 } 00:40:27.854 ] 00:40:27.854 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:27.854 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:27.854 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:28.789 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:28.789 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:28.789 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:29.356 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:29.356 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:29.922 [2024-10-08 21:07:58.627193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:29.922 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:29.922 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:29.922 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:29.922 21:07:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:30.181 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:30.748 request: 00:40:30.748 { 00:40:30.748 "uuid": "30e2dcc4-12d8-43db-b473-00eed12b2d70", 00:40:30.748 "method": "bdev_lvol_get_lvstores", 00:40:30.748 "req_id": 1 00:40:30.748 } 00:40:30.748 Got JSON-RPC error response 00:40:30.748 response: 00:40:30.748 { 00:40:30.748 "code": -19, 00:40:30.748 "message": "No such device" 00:40:30.748 } 00:40:30.748 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:30.748 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:30.748 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:30.748 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:30.748 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:31.316 aio_bdev 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:31.316 21:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:31.316 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:31.882 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d -t 2000 00:40:32.450 [ 00:40:32.450 { 00:40:32.450 "name": "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d", 00:40:32.450 "aliases": [ 00:40:32.450 "lvs/lvol" 00:40:32.450 ], 00:40:32.450 "product_name": "Logical Volume", 00:40:32.450 "block_size": 4096, 00:40:32.450 "num_blocks": 38912, 00:40:32.450 "uuid": "9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d", 00:40:32.450 "assigned_rate_limits": { 00:40:32.450 "rw_ios_per_sec": 0, 00:40:32.450 "rw_mbytes_per_sec": 0, 00:40:32.450 "r_mbytes_per_sec": 0, 00:40:32.450 "w_mbytes_per_sec": 0 00:40:32.450 }, 00:40:32.450 "claimed": false, 00:40:32.450 "zoned": false, 00:40:32.450 "supported_io_types": { 00:40:32.450 "read": true, 00:40:32.450 "write": true, 00:40:32.450 "unmap": true, 00:40:32.450 "flush": false, 00:40:32.450 "reset": true, 00:40:32.450 "nvme_admin": false, 00:40:32.450 "nvme_io": false, 00:40:32.450 "nvme_io_md": false, 00:40:32.450 "write_zeroes": true, 00:40:32.450 "zcopy": false, 00:40:32.450 "get_zone_info": false, 00:40:32.450 "zone_management": false, 00:40:32.450 "zone_append": false, 00:40:32.450 "compare": false, 00:40:32.450 "compare_and_write": false, 00:40:32.450 "abort": false, 00:40:32.450 "seek_hole": true, 00:40:32.450 "seek_data": true, 00:40:32.450 "copy": false, 00:40:32.450 "nvme_iov_md": false 00:40:32.450 }, 00:40:32.450 "driver_specific": { 00:40:32.450 "lvol": { 00:40:32.450 "lvol_store_uuid": "30e2dcc4-12d8-43db-b473-00eed12b2d70", 00:40:32.450 "base_bdev": "aio_bdev", 00:40:32.450 "thin_provision": false, 00:40:32.450 "num_allocated_clusters": 38, 00:40:32.450 "snapshot": false, 00:40:32.450 "clone": false, 00:40:32.450 "esnap_clone": false 00:40:32.450 } 00:40:32.450 } 00:40:32.450 } 00:40:32.450 ] 00:40:32.450 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:32.450 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:32.450 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:33.018 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:33.018 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:33.018 21:08:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:33.276 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:33.276 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d711aa4-9cf5-4f8b-83b8-1d6650d7d52d 00:40:33.842 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30e2dcc4-12d8-43db-b473-00eed12b2d70 00:40:34.101 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:35.040 00:40:35.040 real 0m29.198s 00:40:35.040 user 0m46.131s 00:40:35.040 sys 0m6.284s 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:35.040 ************************************ 00:40:35.040 END TEST lvs_grow_dirty 00:40:35.040 ************************************ 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:35.040 nvmf_trace.0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
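Once the recovered lvstore passes its checks, the teardown that follows in the trace amounts to the steps below (same placeholder variables as in the earlier sketches):

# The recovered lvstore keeps the grown geometry: 99 total data clusters, 61 free.
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

# Delete the lvol, the lvstore and the AIO bdev, then remove the backing file.
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
$rpc bdev_aio_delete aio_bdev
rm -f "$testdir/aio_bdev"

process_shm then archives /dev/shm/nvmf_trace.0 and nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, as the surrounding entries show.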
00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:35.040 rmmod nvme_tcp 00:40:35.040 rmmod nvme_fabrics 00:40:35.040 rmmod nvme_keyring 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:35.040 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1895518 ']' 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1895518 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1895518 ']' 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1895518 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1895518 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1895518' 00:40:35.041 killing process with pid 1895518 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1895518 00:40:35.041 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1895518 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:35.609 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:37.520 00:40:37.520 real 1m1.035s 00:40:37.520 user 1m13.397s 00:40:37.520 sys 0m11.978s 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:37.520 ************************************ 00:40:37.520 END TEST nvmf_lvs_grow 00:40:37.520 ************************************ 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:37.520 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:37.779 ************************************ 00:40:37.779 START TEST nvmf_bdev_io_wait 00:40:37.779 ************************************ 00:40:37.779 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:37.779 * Looking for test storage... 
00:40:37.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:37.779 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:37.779 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:40:37.779 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:38.039 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:38.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.040 --rc genhtml_branch_coverage=1 00:40:38.040 --rc genhtml_function_coverage=1 00:40:38.040 --rc genhtml_legend=1 00:40:38.040 --rc geninfo_all_blocks=1 00:40:38.040 --rc geninfo_unexecuted_blocks=1 00:40:38.040 00:40:38.040 ' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:38.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.040 --rc genhtml_branch_coverage=1 00:40:38.040 --rc genhtml_function_coverage=1 00:40:38.040 --rc genhtml_legend=1 00:40:38.040 --rc geninfo_all_blocks=1 00:40:38.040 --rc geninfo_unexecuted_blocks=1 00:40:38.040 00:40:38.040 ' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:38.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.040 --rc genhtml_branch_coverage=1 00:40:38.040 --rc genhtml_function_coverage=1 00:40:38.040 --rc genhtml_legend=1 00:40:38.040 --rc geninfo_all_blocks=1 00:40:38.040 --rc geninfo_unexecuted_blocks=1 00:40:38.040 00:40:38.040 ' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:38.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.040 --rc genhtml_branch_coverage=1 00:40:38.040 --rc genhtml_function_coverage=1 00:40:38.040 --rc genhtml_legend=1 00:40:38.040 --rc geninfo_all_blocks=1 00:40:38.040 --rc 
geninfo_unexecuted_blocks=1 00:40:38.040 00:40:38.040 ' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:38.040 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:41.334 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:41.335 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:41.335 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:41.335 Found net devices under 0000:84:00.0: cvl_0_0 00:40:41.335 
21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:41.335 Found net devices under 0000:84:00.1: cvl_0_1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:41.335 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:41.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:41.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:40:41.336 00:40:41.336 --- 10.0.0.2 ping statistics --- 00:40:41.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.336 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:41.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:41.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:40:41.336 00:40:41.336 --- 10.0.0.1 ping statistics --- 00:40:41.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.336 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1898597 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1898597 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1898597 ']' 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:41.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:41.336 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.336 [2024-10-08 21:08:09.673334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:41.336 [2024-10-08 21:08:09.674711] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:41.336 [2024-10-08 21:08:09.674777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:41.336 [2024-10-08 21:08:09.786218] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:41.336 [2024-10-08 21:08:09.999955] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:41.336 [2024-10-08 21:08:10.000075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:41.336 [2024-10-08 21:08:10.000111] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:41.336 [2024-10-08 21:08:10.000141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:41.336 [2024-10-08 21:08:10.000167] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:41.336 [2024-10-08 21:08:10.003559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.336 [2024-10-08 21:08:10.003619] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:41.336 [2024-10-08 21:08:10.003740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:41.336 [2024-10-08 21:08:10.003745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.336 [2024-10-08 21:08:10.004312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
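For readability: the nvmftestinit trace above turns the two e810 ports into a point-to-point NVMe/TCP test link by moving one port into a private network namespace, addressing both ends, opening 4420/tcp in the firewall, and ping-checking the path before the target is started. A condensed sketch of that sequence, using the interface and namespace names from this run (cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk are specific to this host, and the real common.sh additionally tags the iptables rule with an SPDK_NVMF comment so it can be removed during teardown):

  # target port goes into its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow the default NVMe/TCP port in on the initiator-facing side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before launching nvmf_tgt
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1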
00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.596 [2024-10-08 21:08:10.342006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:41.596 [2024-10-08 21:08:10.342460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:41.596 [2024-10-08 21:08:10.343889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:41.596 [2024-10-08 21:08:10.345264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
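Two details of the target start above are worth calling out. nvmf_tgt is launched with --wait-for-rpc, so subsystem initialization is held back until the test has changed the bdev options; shrinking the bdev_io pool to 5 entries with a 1-entry per-thread cache (bdev_set_options -p 5 -c 1) is presumably what forces bdevperf I/O into the bdev I/O-wait path this test is named after. A rough sketch of that bring-up, assuming scripts/rpc.py as a stand-in for the rpc_cmd helper used in the trace:

  # start the target inside the test namespace, deferring subsystem init (flags as traced above)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  # while init is deferred, shrink the bdev_io pool and cache so the I/O-wait path triggers easily
  scripts/rpc.py bdev_set_options -p 5 -c 1
  # then let subsystem initialization run
  scripts/rpc.py framework_start_init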
00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.596 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.596 [2024-10-08 21:08:10.356758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.855 Malloc0 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.855 [2024-10-08 21:08:10.452729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1898746 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1898747 00:40:41.855 21:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1898749 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1898751 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:41.855 { 00:40:41.855 "params": { 00:40:41.855 "name": "Nvme$subsystem", 00:40:41.855 "trtype": "$TEST_TRANSPORT", 00:40:41.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.855 "adrfam": "ipv4", 00:40:41.855 "trsvcid": "$NVMF_PORT", 00:40:41.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.855 "hdgst": ${hdgst:-false}, 00:40:41.855 "ddgst": ${ddgst:-false} 00:40:41.855 }, 00:40:41.855 "method": "bdev_nvme_attach_controller" 00:40:41.855 } 00:40:41.855 EOF 00:40:41.855 )") 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:41.855 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:41.856 { 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme$subsystem", 00:40:41.856 "trtype": "$TEST_TRANSPORT", 00:40:41.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "$NVMF_PORT", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.856 "hdgst": ${hdgst:-false}, 00:40:41.856 "ddgst": 
${ddgst:-false} 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 } 00:40:41.856 EOF 00:40:41.856 )") 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:41.856 { 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme$subsystem", 00:40:41.856 "trtype": "$TEST_TRANSPORT", 00:40:41.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "$NVMF_PORT", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.856 "hdgst": ${hdgst:-false}, 00:40:41.856 "ddgst": ${ddgst:-false} 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 } 00:40:41.856 EOF 00:40:41.856 )") 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:41.856 { 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme$subsystem", 00:40:41.856 "trtype": "$TEST_TRANSPORT", 00:40:41.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "$NVMF_PORT", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.856 "hdgst": ${hdgst:-false}, 00:40:41.856 "ddgst": ${ddgst:-false} 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 } 00:40:41.856 EOF 00:40:41.856 )") 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1898746 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme1", 00:40:41.856 "trtype": "tcp", 00:40:41.856 "traddr": "10.0.0.2", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "4420", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:41.856 "hdgst": false, 00:40:41.856 "ddgst": false 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 }' 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme1", 00:40:41.856 "trtype": "tcp", 00:40:41.856 "traddr": "10.0.0.2", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "4420", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:41.856 "hdgst": false, 00:40:41.856 "ddgst": false 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 }' 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme1", 00:40:41.856 "trtype": "tcp", 00:40:41.856 "traddr": "10.0.0.2", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "4420", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:41.856 "hdgst": false, 00:40:41.856 "ddgst": false 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 }' 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:41.856 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:41.856 "params": { 00:40:41.856 "name": "Nvme1", 00:40:41.856 "trtype": "tcp", 00:40:41.856 "traddr": "10.0.0.2", 00:40:41.856 "adrfam": "ipv4", 00:40:41.856 "trsvcid": "4420", 00:40:41.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:41.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:41.856 "hdgst": false, 00:40:41.856 "ddgst": false 00:40:41.856 }, 00:40:41.856 "method": "bdev_nvme_attach_controller" 00:40:41.856 }' 00:40:41.856 [2024-10-08 21:08:10.510417] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:41.856 [2024-10-08 21:08:10.510417] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:41.856 [2024-10-08 21:08:10.510417] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:40:41.856 [2024-10-08 21:08:10.510521] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:41.856 [2024-10-08 21:08:10.510521] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:41.856 [2024-10-08 21:08:10.510521] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:41.856 [2024-10-08 21:08:10.510619] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:41.856 [2024-10-08 21:08:10.510717] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:42.114 [2024-10-08 21:08:10.691855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.114 [2024-10-08 21:08:10.802106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:40:42.114 [2024-10-08 21:08:10.826563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.372 [2024-10-08 21:08:10.936069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:40:42.372 [2024-10-08 21:08:10.959388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.372 [2024-10-08 21:08:11.063737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:40:42.372 [2024-10-08 21:08:11.064466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.630 [2024-10-08 21:08:11.167104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:40:42.630 Running I/O for 1 seconds... 00:40:42.630 Running I/O for 1 seconds... 00:40:42.888 Running I/O for 1 seconds... 00:40:43.145 Running I/O for 1 seconds...
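Before the per-workload results below, it may help to see in one place the data path the trace above has just assembled: the target exports a 64 MiB, 512-byte-block malloc bdev through a single TCP subsystem, and four bdevperf instances (write, read, flush, unmap; one core each) attach to it through a generated JSON config passed on fd 63, whose bdev_nvme_attach_controller entry is printed verbatim above. A sketch of the equivalent commands, with every name, address and flag taken from this run and scripts/rpc.py again assumed as the stand-in for rpc_cmd:

  # target side (RPC socket /var/tmp/spdk.sock inside cvl_0_0_ns_spdk)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: one bdevperf per workload, each reading the generated JSON from fd 63
  build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
  build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
  build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
  build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256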
00:40:43.712 9842.00 IOPS, 38.45 MiB/s [2024-10-08T19:08:12.475Z] 8809.00 IOPS, 34.41 MiB/s 00:40:43.712 Latency(us) 00:40:43.712 [2024-10-08T19:08:12.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:43.712 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:43.712 Nvme1n1 : 1.01 9885.52 38.62 0.00 0.00 12890.27 4247.70 16311.18 00:40:43.712 [2024-10-08T19:08:12.475Z] =================================================================================================================== 00:40:43.712 [2024-10-08T19:08:12.475Z] Total : 9885.52 38.62 0.00 0.00 12890.27 4247.70 16311.18 00:40:43.712 00:40:43.712 Latency(us) 00:40:43.712 [2024-10-08T19:08:12.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:43.712 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:43.712 Nvme1n1 : 1.01 8871.84 34.66 0.00 0.00 14364.59 4611.79 21651.15 00:40:43.712 [2024-10-08T19:08:12.475Z] =================================================================================================================== 00:40:43.712 [2024-10-08T19:08:12.475Z] Total : 8871.84 34.66 0.00 0.00 14364.59 4611.79 21651.15 00:40:43.969 200248.00 IOPS, 782.22 MiB/s 00:40:43.969 Latency(us) 00:40:43.969 [2024-10-08T19:08:12.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:43.969 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:43.969 Nvme1n1 : 1.00 199866.26 780.73 0.00 0.00 636.93 312.51 1868.99 00:40:43.969 [2024-10-08T19:08:12.732Z] =================================================================================================================== 00:40:43.969 [2024-10-08T19:08:12.732Z] Total : 199866.26 780.73 0.00 0.00 636.93 312.51 1868.99 00:40:43.969 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1898747 00:40:43.969 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1898749 00:40:44.227 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1898751 00:40:44.227 10194.00 IOPS, 39.82 MiB/s 00:40:44.227 Latency(us) 00:40:44.227 [2024-10-08T19:08:12.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.227 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:44.227 Nvme1n1 : 1.01 10272.03 40.13 0.00 0.00 12420.57 2682.12 19515.16 00:40:44.227 [2024-10-08T19:08:12.990Z] =================================================================================================================== 00:40:44.227 [2024-10-08T19:08:12.990Z] Total : 10272.03 40.13 0.00 0.00 12420.57 2682.12 19515.16 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:44.486 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.486 rmmod nvme_tcp 00:40:44.486 rmmod nvme_fabrics 00:40:44.486 rmmod nvme_keyring 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1898597 ']' 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1898597 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1898597 ']' 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1898597 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1898597 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1898597' 00:40:44.486 killing process with pid 1898597 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1898597 00:40:44.486 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1898597 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:45.056 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:45.056 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.965 00:40:46.965 real 0m9.348s 00:40:46.965 user 0m17.913s 00:40:46.965 sys 0m5.616s 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:46.965 ************************************ 00:40:46.965 END TEST nvmf_bdev_io_wait 00:40:46.965 ************************************ 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:46.965 ************************************ 00:40:46.965 START TEST nvmf_queue_depth 00:40:46.965 ************************************ 00:40:46.965 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:47.227 * Looking for test storage... 
00:40:47.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:47.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.227 --rc genhtml_branch_coverage=1 00:40:47.227 --rc genhtml_function_coverage=1 00:40:47.227 --rc genhtml_legend=1 00:40:47.227 --rc geninfo_all_blocks=1 00:40:47.227 --rc geninfo_unexecuted_blocks=1 00:40:47.227 00:40:47.227 ' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:47.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.227 --rc genhtml_branch_coverage=1 00:40:47.227 --rc genhtml_function_coverage=1 00:40:47.227 --rc genhtml_legend=1 00:40:47.227 --rc geninfo_all_blocks=1 00:40:47.227 --rc geninfo_unexecuted_blocks=1 00:40:47.227 00:40:47.227 ' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:47.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.227 --rc genhtml_branch_coverage=1 00:40:47.227 --rc genhtml_function_coverage=1 00:40:47.227 --rc genhtml_legend=1 00:40:47.227 --rc geninfo_all_blocks=1 00:40:47.227 --rc geninfo_unexecuted_blocks=1 00:40:47.227 00:40:47.227 ' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:47.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.227 --rc genhtml_branch_coverage=1 00:40:47.227 --rc genhtml_function_coverage=1 00:40:47.227 --rc genhtml_legend=1 00:40:47.227 --rc geninfo_all_blocks=1 00:40:47.227 --rc 
geninfo_unexecuted_blocks=1 00:40:47.227 00:40:47.227 ' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.227 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:47.228 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:50.522 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:50.523 21:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:50.523 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:50.523 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:40:50.523 Found net devices under 0000:84:00.0: cvl_0_0 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:50.523 Found net devices under 0000:84:00.1: cvl_0_1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:50.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:50.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:40:50.523 00:40:50.523 --- 10.0.0.2 ping statistics --- 00:40:50.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.523 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:50.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:50.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:40:50.523 00:40:50.523 --- 10.0.0.1 ping statistics --- 00:40:50.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.523 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1901115 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1901115 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1901115 ']' 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.523 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:50.524 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
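[editorial sketch] The nvmf_tcp_init steps traced above build a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1/24), an iptables rule opens TCP/4420 on the initiator port, and a ping in each direction verifies the link before the target is started. Roughly the same setup done by hand, assuming the same interface names:

    # Sketch of the topology assembled by nvmf_tcp_init (interface names as in this run).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The harness tags this rule with an SPDK_NVMF comment so nvmftestfini can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host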
00:40:50.524 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:50.524 21:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.524 [2024-10-08 21:08:18.997734] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:50.524 [2024-10-08 21:08:18.999325] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:50.524 [2024-10-08 21:08:18.999403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.524 [2024-10-08 21:08:19.096700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.524 [2024-10-08 21:08:19.240716] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.524 [2024-10-08 21:08:19.240789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:50.524 [2024-10-08 21:08:19.240811] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:50.524 [2024-10-08 21:08:19.240827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:50.524 [2024-10-08 21:08:19.240842] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:50.524 [2024-10-08 21:08:19.241615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:50.784 [2024-10-08 21:08:19.374607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:50.784 [2024-10-08 21:08:19.375354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
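[editorial sketch] These notices are what distinguishes the interrupt-mode pass from a polled-mode run: spdk_interrupt_mode_enable fires before DPDK initialization, a single reactor comes up on core 1 (core mask 0x2), and both app_thread and the nvmf poll group thread are switched to interrupt mode. The launch traced above amounts to running nvmf_tgt inside the target namespace with --interrupt-mode; a minimal sketch, with the workspace path as in this job and the rpc.py reactor check added as an assumption rather than something the test itself does:

    # Sketch of the target launch seen in the trace (workspace path assumed).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

    # Once /var/tmp/spdk.sock is listening, reactor/interrupt state can be inspected, e.g.:
    "$SPDK/scripts/rpc.py" framework_get_reactors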
00:40:50.784 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:50.784 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 [2024-10-08 21:08:19.446541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 Malloc0 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.785 [2024-10-08 21:08:19.538812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1901261 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1901261 /var/tmp/bdevperf.sock 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1901261 ']' 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:50.785 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:50.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:51.044 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:51.044 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:51.044 [2024-10-08 21:08:19.602107] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
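[editorial sketch] Before bdevperf comes up, queue_depth.sh has already pushed the whole target configuration through rpc_cmd: the TCP transport with the options assembled in NVMF_TRANSPORT_OPTS (-t tcp -o -u 8192), a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. Outside the harness the same sequence would look roughly like this with scripts/rpc.py against the default /var/tmp/spdk.sock:

    # Rough rpc.py equivalent of the rpc_cmd sequence traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420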
00:40:51.044 [2024-10-08 21:08:19.602218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901261 ] 00:40:51.044 [2024-10-08 21:08:19.718761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.303 [2024-10-08 21:08:19.891107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.303 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:51.303 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:51.303 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:51.303 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.303 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:51.562 NVMe0n1 00:40:51.562 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.562 21:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:51.822 Running I/O for 10 seconds... 00:40:53.754 3079.00 IOPS, 12.03 MiB/s [2024-10-08T19:08:23.457Z] 3559.50 IOPS, 13.90 MiB/s [2024-10-08T19:08:24.838Z] 3452.67 IOPS, 13.49 MiB/s [2024-10-08T19:08:25.777Z] 3584.00 IOPS, 14.00 MiB/s [2024-10-08T19:08:26.716Z] 3684.80 IOPS, 14.39 MiB/s [2024-10-08T19:08:27.656Z] 3628.00 IOPS, 14.17 MiB/s [2024-10-08T19:08:28.594Z] 3657.29 IOPS, 14.29 MiB/s [2024-10-08T19:08:29.532Z] 3698.25 IOPS, 14.45 MiB/s [2024-10-08T19:08:30.910Z] 3679.78 IOPS, 14.37 MiB/s [2024-10-08T19:08:30.910Z] 3985.50 IOPS, 15.57 MiB/s 00:41:02.147 Latency(us) 00:41:02.147 [2024-10-08T19:08:30.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:02.147 Verification LBA range: start 0x0 length 0x4000 00:41:02.147 NVMe0n1 : 10.15 4027.39 15.73 0.00 0.00 251814.57 47574.28 163888.55 00:41:02.147 [2024-10-08T19:08:30.910Z] =================================================================================================================== 00:41:02.147 [2024-10-08T19:08:30.910Z] Total : 4027.39 15.73 0.00 0.00 251814.57 47574.28 163888.55 00:41:02.147 { 00:41:02.147 "results": [ 00:41:02.147 { 00:41:02.147 "job": "NVMe0n1", 00:41:02.147 "core_mask": "0x1", 00:41:02.147 "workload": "verify", 00:41:02.147 "status": "finished", 00:41:02.147 "verify_range": { 00:41:02.147 "start": 0, 00:41:02.147 "length": 16384 00:41:02.147 }, 00:41:02.147 "queue_depth": 1024, 00:41:02.147 "io_size": 4096, 00:41:02.147 "runtime": 10.148497, 00:41:02.147 "iops": 4027.3944013581518, 00:41:02.147 "mibps": 15.73200938030528, 00:41:02.147 "io_failed": 0, 00:41:02.147 "io_timeout": 0, 00:41:02.147 "avg_latency_us": 251814.56756370384, 00:41:02.147 "min_latency_us": 47574.281481481485, 00:41:02.148 "max_latency_us": 163888.54518518518 00:41:02.148 } 
00:41:02.148 ], 00:41:02.148 "core_count": 1 00:41:02.148 } 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1901261 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1901261 ']' 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1901261 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901261 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901261' 00:41:02.148 killing process with pid 1901261 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1901261 00:41:02.148 Received shutdown signal, test time was about 10.000000 seconds 00:41:02.148 00:41:02.148 Latency(us) 00:41:02.148 [2024-10-08T19:08:30.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.148 [2024-10-08T19:08:30.911Z] =================================================================================================================== 00:41:02.148 [2024-10-08T19:08:30.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:02.148 21:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1901261 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:02.408 rmmod nvme_tcp 00:41:02.408 rmmod nvme_fabrics 00:41:02.408 rmmod nvme_keyring 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
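[editorial sketch] The measurement itself is driven from the host namespace: bdevperf starts idle (-z) on its own RPC socket, the NVMe-oF controller is attached over TCP, and perform_tests runs a 10-second verify workload at queue depth 1024 with 4 KiB I/O, which is what produced the ~4027 IOPS figure in the JSON result above. Reproduced by hand, under the same paths and addresses as this run, it would look roughly like:

    # Sketch of the bdevperf invocation traced above (workspace path assumed).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BD_SOCK=/var/tmp/bdevperf.sock

    "$SPDK/build/examples/bdevperf" -z -r "$BD_SOCK" -q 1024 -o 4096 -w verify -t 10 &

    # Attach the remote namespace as bdev NVMe0n1, then kick off the timed run.
    "$SPDK/scripts/rpc.py" -s "$BD_SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BD_SOCK" perform_tests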
00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1901115 ']' 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1901115 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1901115 ']' 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1901115 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:02.408 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901115 00:41:02.668 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:02.668 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:02.668 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901115' 00:41:02.668 killing process with pid 1901115 00:41:02.668 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1901115 00:41:02.668 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1901115 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:02.972 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:05.514 00:41:05.514 real 0m17.990s 00:41:05.514 user 0m23.679s 00:41:05.514 sys 0m4.683s 00:41:05.514 21:08:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:05.514 ************************************ 00:41:05.514 END TEST nvmf_queue_depth 00:41:05.514 ************************************ 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:05.514 ************************************ 00:41:05.514 START TEST nvmf_target_multipath 00:41:05.514 ************************************ 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:05.514 * Looking for test storage... 00:41:05.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.514 --rc genhtml_branch_coverage=1 00:41:05.514 --rc genhtml_function_coverage=1 00:41:05.514 --rc genhtml_legend=1 00:41:05.514 --rc geninfo_all_blocks=1 00:41:05.514 --rc geninfo_unexecuted_blocks=1 00:41:05.514 00:41:05.514 ' 00:41:05.514 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.514 --rc genhtml_branch_coverage=1 00:41:05.514 --rc genhtml_function_coverage=1 00:41:05.514 --rc genhtml_legend=1 00:41:05.515 --rc geninfo_all_blocks=1 00:41:05.515 --rc geninfo_unexecuted_blocks=1 00:41:05.515 00:41:05.515 ' 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:05.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.515 --rc genhtml_branch_coverage=1 00:41:05.515 --rc genhtml_function_coverage=1 00:41:05.515 --rc genhtml_legend=1 
00:41:05.515 --rc geninfo_all_blocks=1 00:41:05.515 --rc geninfo_unexecuted_blocks=1 00:41:05.515 00:41:05.515 ' 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:05.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.515 --rc genhtml_branch_coverage=1 00:41:05.515 --rc genhtml_function_coverage=1 00:41:05.515 --rc genhtml_legend=1 00:41:05.515 --rc geninfo_all_blocks=1 00:41:05.515 --rc geninfo_unexecuted_blocks=1 00:41:05.515 00:41:05.515 ' 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:05.515 21:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:05.515 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:08.805 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:08.805 21:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:08.806 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:08.806 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:08.806 21:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:08.806 Found net devices under 0000:84:00.0: cvl_0_0 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:08.806 Found net devices under 0000:84:00.1: cvl_0_1 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:08.806 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:08.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:08.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:41:08.806 00:41:08.806 --- 10.0.0.2 ping statistics --- 00:41:08.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.806 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:08.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:08.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:41:08.806 00:41:08.806 --- 10.0.0.1 ping statistics --- 00:41:08.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.806 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:08.806 only one NIC for nvmf test 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:08.806 rmmod nvme_tcp 00:41:08.806 rmmod nvme_fabrics 00:41:08.806 rmmod nvme_keyring 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:08.806 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:08.806 21:08:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.807 21:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:41:10.711 21:08:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:10.711 00:41:10.711 real 0m5.591s 00:41:10.711 user 0m1.118s 00:41:10.711 sys 0m2.503s 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:10.711 ************************************ 00:41:10.711 END TEST nvmf_target_multipath 00:41:10.711 ************************************ 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:10.711 ************************************ 00:41:10.711 START TEST nvmf_zcopy 00:41:10.711 ************************************ 00:41:10.711 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:10.971 * Looking for test storage... 
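Both tests in this log drive the same nvmftestinit network bring-up. A minimal standalone sketch of that bring-up, using only commands visible in the trace and the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addressing observed in this run:

# Target port cvl_0_0 moves into a private namespace; initiator port cvl_0_1
# stays in the default namespace, so NVMe/TCP traffic crosses the physical link.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP (port 4420) arriving on the initiator-side interface; the
# comment tag lets nvmftestfini strip the rule again via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Connectivity checks, matching the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1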
00:41:10.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.971 --rc genhtml_branch_coverage=1 00:41:10.971 --rc genhtml_function_coverage=1 00:41:10.971 --rc genhtml_legend=1 00:41:10.971 --rc geninfo_all_blocks=1 00:41:10.971 --rc geninfo_unexecuted_blocks=1 00:41:10.971 00:41:10.971 ' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.971 --rc genhtml_branch_coverage=1 00:41:10.971 --rc genhtml_function_coverage=1 00:41:10.971 --rc genhtml_legend=1 00:41:10.971 --rc geninfo_all_blocks=1 00:41:10.971 --rc geninfo_unexecuted_blocks=1 00:41:10.971 00:41:10.971 ' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.971 --rc genhtml_branch_coverage=1 00:41:10.971 --rc genhtml_function_coverage=1 00:41:10.971 --rc genhtml_legend=1 00:41:10.971 --rc geninfo_all_blocks=1 00:41:10.971 --rc geninfo_unexecuted_blocks=1 00:41:10.971 00:41:10.971 ' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.971 --rc genhtml_branch_coverage=1 00:41:10.971 --rc genhtml_function_coverage=1 00:41:10.971 --rc genhtml_legend=1 00:41:10.971 --rc geninfo_all_blocks=1 00:41:10.971 --rc geninfo_unexecuted_blocks=1 00:41:10.971 00:41:10.971 ' 00:41:10.971 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:11.232 21:08:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:11.232 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:11.233 21:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:14.530 21:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:14.530 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:14.530 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:14.530 Found net devices under 0000:84:00.0: cvl_0_0 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.530 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:14.531 Found net devices under 0000:84:00.1: cvl_0_1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:14.531 21:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:14.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:14.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:41:14.531 00:41:14.531 --- 10.0.0.2 ping statistics --- 00:41:14.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.531 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:14.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:14.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:41:14.531 00:41:14.531 --- 10.0.0.1 ping statistics --- 00:41:14.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.531 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1906602 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1906602 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1906602 ']' 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:14.531 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:14.531 [2024-10-08 21:08:42.980723] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:14.531 [2024-10-08 21:08:42.982223] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:41:14.531 [2024-10-08 21:08:42.982302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:14.531 [2024-10-08 21:08:43.099088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.791 [2024-10-08 21:08:43.326897] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:14.791 [2024-10-08 21:08:43.327004] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:14.791 [2024-10-08 21:08:43.327040] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:14.791 [2024-10-08 21:08:43.327072] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:14.791 [2024-10-08 21:08:43.327098] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:14.791 [2024-10-08 21:08:43.328286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.791 [2024-10-08 21:08:43.503375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:14.791 [2024-10-08 21:08:43.503919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
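For the zcopy test the harness starts the target inside the namespace in interrupt mode on a single core, as shown by the command line above. A minimal sketch of the same launch follows; the polling loop is only a stand-in for the harness's waitforlisten, which waits on the default RPC socket /var/tmp/spdk.sock named in the log.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Wait until the target has created its RPC socket before issuing any RPCs.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done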
00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 [2024-10-08 21:08:43.641551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 [2024-10-08 21:08:43.669878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:15.052 21:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 malloc0 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:15.052 { 00:41:15.052 "params": { 00:41:15.052 "name": "Nvme$subsystem", 00:41:15.052 "trtype": "$TEST_TRANSPORT", 00:41:15.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.052 "adrfam": "ipv4", 00:41:15.052 "trsvcid": "$NVMF_PORT", 00:41:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.052 "hdgst": ${hdgst:-false}, 00:41:15.052 "ddgst": ${ddgst:-false} 00:41:15.052 }, 00:41:15.052 "method": "bdev_nvme_attach_controller" 00:41:15.052 } 00:41:15.052 EOF 00:41:15.052 )") 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:41:15.052 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:15.052 "params": { 00:41:15.052 "name": "Nvme1", 00:41:15.052 "trtype": "tcp", 00:41:15.052 "traddr": "10.0.0.2", 00:41:15.052 "adrfam": "ipv4", 00:41:15.052 "trsvcid": "4420", 00:41:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:15.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:15.052 "hdgst": false, 00:41:15.052 "ddgst": false 00:41:15.052 }, 00:41:15.052 "method": "bdev_nvme_attach_controller" 00:41:15.052 }' 00:41:15.312 [2024-10-08 21:08:43.837428] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
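
The rpc_cmd calls above configure the target end to end. As a standalone sketch, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, with the arguments copied verbatim from the zcopy.sh lines in the log:

# TCP transport with zero-copy enabled; remaining flags as used by zcopy.sh
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem with open host access, fixed serial number, up to 10 namespaces
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# data and discovery listeners on 10.0.0.2:4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
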
00:41:15.312 [2024-10-08 21:08:43.837603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906748 ] 00:41:15.312 [2024-10-08 21:08:43.976608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.573 [2024-10-08 21:08:44.192161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.834 Running I/O for 10 seconds... 00:41:17.719 2296.00 IOPS, 17.94 MiB/s [2024-10-08T19:08:47.866Z] 2396.50 IOPS, 18.72 MiB/s [2024-10-08T19:08:48.809Z] 2444.67 IOPS, 19.10 MiB/s [2024-10-08T19:08:49.751Z] 2439.75 IOPS, 19.06 MiB/s [2024-10-08T19:08:50.692Z] 2460.40 IOPS, 19.22 MiB/s [2024-10-08T19:08:51.632Z] 2448.17 IOPS, 19.13 MiB/s [2024-10-08T19:08:52.609Z] 2439.57 IOPS, 19.06 MiB/s [2024-10-08T19:08:53.545Z] 2563.38 IOPS, 20.03 MiB/s [2024-10-08T19:08:54.925Z] 2756.11 IOPS, 21.53 MiB/s [2024-10-08T19:08:54.925Z] 2740.80 IOPS, 21.41 MiB/s 00:41:26.162 Latency(us) 00:41:26.162 [2024-10-08T19:08:54.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:26.162 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:26.162 Verification LBA range: start 0x0 length 0x1000 00:41:26.162 Nvme1n1 : 10.04 2741.76 21.42 0.00 0.00 46511.32 6043.88 67963.26 00:41:26.162 [2024-10-08T19:08:54.926Z] =================================================================================================================== 00:41:26.163 [2024-10-08T19:08:54.926Z] Total : 2741.76 21.42 0.00 0.00 46511.32 6043.88 67963.26 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1907932 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:26.422 { 00:41:26.422 "params": { 00:41:26.422 "name": "Nvme$subsystem", 00:41:26.422 "trtype": "$TEST_TRANSPORT", 00:41:26.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:26.422 "adrfam": "ipv4", 00:41:26.422 "trsvcid": "$NVMF_PORT", 00:41:26.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:26.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:26.422 "hdgst": ${hdgst:-false}, 00:41:26.422 "ddgst": ${ddgst:-false} 00:41:26.422 }, 00:41:26.422 "method": "bdev_nvme_attach_controller" 00:41:26.422 } 00:41:26.422 EOF 00:41:26.422 )") 00:41:26.422 [2024-10-08 21:08:54.941423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:41:26.422 [2024-10-08 21:08:54.941513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:41:26.422 21:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:26.422 "params": { 00:41:26.422 "name": "Nvme1", 00:41:26.422 "trtype": "tcp", 00:41:26.422 "traddr": "10.0.0.2", 00:41:26.422 "adrfam": "ipv4", 00:41:26.422 "trsvcid": "4420", 00:41:26.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:26.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:26.422 "hdgst": false, 00:41:26.422 "ddgst": false 00:41:26.422 }, 00:41:26.422 "method": "bdev_nvme_attach_controller" 00:41:26.422 }' 00:41:26.422 [2024-10-08 21:08:54.953287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:54.953347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:54.965282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:54.965339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:54.977280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:54.977334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:54.989279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:54.989333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.001280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.001335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.002968] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
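
The second bdevperf pass started above reads its controller definition over a pipe. A minimal host-side sketch, with the bdevperf path and flags copied from the command line in the log and the JSON fed via process substitution (which is where the /dev/fd/63 descriptor comes from); gen_nvmf_target_json is the nvmf/common.sh helper whose output is printed above:

# 5-second 50/50 random read/write run, queue depth 128, 8 KiB I/O size
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
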
00:41:26.422 [2024-10-08 21:08:55.003086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907932 ] 00:41:26.422 [2024-10-08 21:08:55.013278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.013333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.025280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.025333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.037279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.037332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.049280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.049332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.061279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.061349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.073281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.073333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.085279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.085333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.097281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.097335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.109279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.109333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.121282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.121334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.131773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.422 [2024-10-08 21:08:55.133280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.133333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.145347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.145421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.157318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.157383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:26.422 [2024-10-08 21:08:55.169279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.169333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.422 [2024-10-08 21:08:55.181279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.422 [2024-10-08 21:08:55.181332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.193281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.193335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.205281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.205334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.217280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.217333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.229280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.229333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.241326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.241394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.253321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.253391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.265290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.265345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.277280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.277334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.289279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.289332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.301280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.301344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.313286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.313343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.325281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.325334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.330887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.683 [2024-10-08 
21:08:55.337279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.337333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.349297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.349367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.361345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.361416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.373343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.373418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.385341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.385415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.397359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.397440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.409334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.409409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.421338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.421411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.683 [2024-10-08 21:08:55.433279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.683 [2024-10-08 21:08:55.433333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.445354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.445428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.457349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.457427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.469339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.469413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.481278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.481331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.493283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.493336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.505313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.505395] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.517299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.517360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.529301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.529365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.541324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.541386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.553289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.553350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.565314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.565381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.577300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.577363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 Running I/O for 5 seconds... 00:41:26.943 [2024-10-08 21:08:55.601390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.601458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.624629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.624720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.647200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.647267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.670408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.670474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.943 [2024-10-08 21:08:55.694439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.943 [2024-10-08 21:08:55.694506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.202 [2024-10-08 21:08:55.715547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.202 [2024-10-08 21:08:55.715613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.202 [2024-10-08 21:08:55.737971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.202 [2024-10-08 21:08:55.738038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.202 [2024-10-08 21:08:55.760757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.760787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.782983] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.783048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.804624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.804711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.829270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.829336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.851625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.851710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.874349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.874442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.897131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.897197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.918726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.918757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.941054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.941125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.203 [2024-10-08 21:08:55.962583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.203 [2024-10-08 21:08:55.962666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:55.987768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:55.987799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.010617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.010709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.033794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.033824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.055683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.055761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.078047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.078113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.100391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.100457] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.121920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.121999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.142888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.142922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.166357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.166423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.187804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.187835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.461 [2024-10-08 21:08:56.208134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.461 [2024-10-08 21:08:56.208204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.226018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.226047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.255268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.255334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.279684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.279750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.303436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.303520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.325950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.326016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.351470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.351538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.374538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.374605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.396146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.396212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.418774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.418804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.440514] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.440581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.720 [2024-10-08 21:08:56.462789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.720 [2024-10-08 21:08:56.462819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.484740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.484771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.504376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.504405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.525573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.525639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.546284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.546351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.570038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.570104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 5655.00 IOPS, 44.18 MiB/s [2024-10-08T19:08:56.742Z] [2024-10-08 21:08:56.591491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.591558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.612759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.612788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.634686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.634736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.979 [2024-10-08 21:08:56.659769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.979 [2024-10-08 21:08:56.659799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.980 [2024-10-08 21:08:56.682265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.980 [2024-10-08 21:08:56.682333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.980 [2024-10-08 21:08:56.703840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.980 [2024-10-08 21:08:56.703870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.980 [2024-10-08 21:08:56.725818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.980 [2024-10-08 21:08:56.725848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.238 [2024-10-08 21:08:56.748587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:28.238 [2024-10-08 21:08:56.748672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.238 [2024-10-08 21:08:56.771631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.238 [2024-10-08 21:08:56.771740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.238 [2024-10-08 21:08:56.793602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.238 [2024-10-08 21:08:56.793718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.238 [2024-10-08 21:08:56.814376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.238 [2024-10-08 21:08:56.814442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.238 [2024-10-08 21:08:56.843889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.238 [2024-10-08 21:08:56.843919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.864048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.864114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.890058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.890124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.911895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.911925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.935313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.935379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.957304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.957369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.979282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:56.979348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.239 [2024-10-08 21:08:56.999997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.239 [2024-10-08 21:08:57.000063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.022241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.022306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.046377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.046443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.067781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.067811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.088625] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.088709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.108486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.108553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.130685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.130742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.153727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.153792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.175785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.175851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.199300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.199366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.221023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.221095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.498 [2024-10-08 21:08:57.242057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.498 [2024-10-08 21:08:57.242124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.263600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.263701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.285745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.285775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.307270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.307340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.329395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.329461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.351966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.352033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.374626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.374723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.396672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.396743] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.757 [2024-10-08 21:08:57.418855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.757 [2024-10-08 21:08:57.418885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.758 [2024-10-08 21:08:57.440844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.758 [2024-10-08 21:08:57.440874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.758 [2024-10-08 21:08:57.463118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.758 [2024-10-08 21:08:57.463185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.758 [2024-10-08 21:08:57.484746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.758 [2024-10-08 21:08:57.484785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.758 [2024-10-08 21:08:57.505976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:28.758 [2024-10-08 21:08:57.506042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.527404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.527470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.549732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.549775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.570826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.570856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 5712.50 IOPS, 44.63 MiB/s [2024-10-08T19:08:57.780Z] [2024-10-08 21:08:57.591432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.591498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.611979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.612045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.635206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.635273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.656969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.657037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.679551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.679620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.702432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.702499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 
21:08:57.724895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.724969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.745845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.745875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.017 [2024-10-08 21:08:57.767532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.017 [2024-10-08 21:08:57.767599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.788334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.788405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.807638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.807732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.828839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.828870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.850432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.850498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.871251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.871317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.894741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.894770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.925542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.925608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.944053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.944119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.969189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.969272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:57.991173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.276 [2024-10-08 21:08:57.991239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.276 [2024-10-08 21:08:58.012427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.277 [2024-10-08 21:08:58.012493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.277 [2024-10-08 21:08:58.036255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.277 [2024-10-08 21:08:58.036321] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.055294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.055360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.079870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.079900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.103851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.103881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.126847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.126877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.148056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.148122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.167854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.167883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.191800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.191830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.213167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.213254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.235855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.235885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.260192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.260260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.533 [2024-10-08 21:08:58.281577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.533 [2024-10-08 21:08:58.281642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.301840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.301870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.324254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.324320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.347718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.347748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.368745] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.368775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.390862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.390901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.412574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.412640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.434707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.434756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.456798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.456827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.480581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.480648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.505208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.505274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.528435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.528502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:29.791 [2024-10-08 21:08:58.552282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:29.791 [2024-10-08 21:08:58.552350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.573926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.573994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 5721.33 IOPS, 44.70 MiB/s [2024-10-08T19:08:58.812Z] [2024-10-08 21:08:58.595872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.595928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.620008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.620074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.641515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.641583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.663848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.663878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.686739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:30.049 [2024-10-08 21:08:58.686769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.707904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.707933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.729596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.729678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.751321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.049 [2024-10-08 21:08:58.751387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.049 [2024-10-08 21:08:58.774209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.050 [2024-10-08 21:08:58.774275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.050 [2024-10-08 21:08:58.795521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.050 [2024-10-08 21:08:58.795588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.813632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.813676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.832737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.832767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.855227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.855293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.878123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.878200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.899621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.899721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.921421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.921488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.942429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.942502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.965872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.965912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:58.987959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:58.988025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:59.008019] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:59.008085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:59.029070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:59.029141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:59.049979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:59.050044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.308 [2024-10-08 21:08:59.069736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.308 [2024-10-08 21:08:59.069766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.091529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.091596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.111864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.111894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.135266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.135333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.156134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.156201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.178134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.178199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.199799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.199828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.220871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.220921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.242636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.242712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.264217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.264284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.286302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.286368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.308499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.308564] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.567 [2024-10-08 21:08:59.329785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.567 [2024-10-08 21:08:59.329815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.351373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.351441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.373837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.373867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.396733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.396763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.418343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.418409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.440488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.440556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.461881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.461945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.484642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.484716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.507801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.507830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.529856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.529886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.551885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.551945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.827 [2024-10-08 21:08:59.574729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:30.827 [2024-10-08 21:08:59.574759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 5756.00 IOPS, 44.97 MiB/s [2024-10-08T19:08:59.848Z] [2024-10-08 21:08:59.594970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.595037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.617010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.617103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 
21:08:59.638259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.638327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.659812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.659843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.681022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.681094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.702060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.702133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.725775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.725804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.747358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.747425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.768826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.768856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.789616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.789710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.811154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.811220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.085 [2024-10-08 21:08:59.833874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.085 [2024-10-08 21:08:59.833932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.854041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.854107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.876946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.877011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.899048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.899114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.920915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.920992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.942311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.942376] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.963193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.963260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:08:59.986281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:08:59.986348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.009678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.009739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.022451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.022499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.035450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.035487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.052917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.052981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.072829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.072861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.345 [2024-10-08 21:09:00.095455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.345 [2024-10-08 21:09:00.095524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.117680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.117739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.139896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.139965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.161063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.161136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.183741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.183772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.204043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.204118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.224930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.224999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.245878] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.245935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.268030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.268100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.291431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.291500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.313207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.313275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.335627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.335714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.604 [2024-10-08 21:09:00.356532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.604 [2024-10-08 21:09:00.356600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.377810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.377840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.398257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.398326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.422185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.422287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.445014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.445106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.466410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.466479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.487364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.487431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.509507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.509574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.531218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.531285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:31.864 [2024-10-08 21:09:00.554073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:31.864 [2024-10-08 21:09:00.554141] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:31.864 [2024-10-08 21:09:00.576716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:31.864 [2024-10-08 21:09:00.576748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:31.864 5798.00 IOPS, 45.30 MiB/s [2024-10-08T19:09:00.627Z] [2024-10-08 21:09:00.598339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:31.864 [2024-10-08 21:09:00.598406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:31.864 [2024-10-08 21:09:00.618230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:31.864 [2024-10-08 21:09:00.618297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:31.864 
00:41:31.864 Latency(us)
00:41:31.864 [2024-10-08T19:09:00.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:31.864 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:31.864 Nvme1n1 : 5.03 5797.73 45.29 0.00 0.00 22031.24 6068.15 37282.70
00:41:31.864 [2024-10-08T19:09:00.627Z] ===================================================================================================================
00:41:31.864 [2024-10-08T19:09:00.627Z] Total : 5797.73 45.29 0.00 0.00 22031.24 6068.15 37282.70
00:41:31.864 [2024-10-08 21:09:00.625177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:31.864 [2024-10-08 21:09:00.625205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.123 [2024-10-08 21:09:00.633569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.123 [2024-10-08 21:09:00.633633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.641171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.641197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.649174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.649201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.657235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.657283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.665247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.665302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.673239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.673294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.681244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.681298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:32.124 [2024-10-08 21:09:00.689231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:32.124 [2024-10-08 21:09:00.689283] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.697235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.697286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.705246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.705297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.713245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.713302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.721258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.721315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.729245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.729302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.737245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.737300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.745244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.745301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.753241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.753296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.761241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.761295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.769239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.769293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.777240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.777295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.785174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.785201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.793301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.793361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.801173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.801199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.809293] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.809352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.817287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.817346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.825304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.825380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.833174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.833200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.841354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.841431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.849252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.849305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.857248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.857298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.865262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.865316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.873235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.873288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.124 [2024-10-08 21:09:00.885292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.124 [2024-10-08 21:09:00.885345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.893280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.893332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.901294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.901346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.909299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.909351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.917381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.917456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.925224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.925271] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.933225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.933270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.941169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.941228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.949294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.949348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.384 [2024-10-08 21:09:00.957292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.384 [2024-10-08 21:09:00.957345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.385 [2024-10-08 21:09:00.965166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.385 [2024-10-08 21:09:00.965190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.385 [2024-10-08 21:09:00.973280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:32.385 [2024-10-08 21:09:00.973346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:32.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1907932) - No such process 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1907932 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:32.385 delay0 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.385 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:32.385 21:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.385 21:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:32.385 [2024-10-08 21:09:01.071423] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:40.583 Initializing NVMe Controllers 00:41:40.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:40.583 Initialization complete. Launching workers. 00:41:40.583 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 223, failed: 10736 00:41:40.583 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10813, failed to submit 146 00:41:40.583 success 10747, unsuccessful 66, failed 0 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:40.583 rmmod nvme_tcp 00:41:40.583 rmmod nvme_fabrics 00:41:40.583 rmmod nvme_keyring 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1906602 ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1906602 ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1906602' 00:41:40.583 killing process with pid 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1906602 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:40.583 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:40.584 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:40.584 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:40.584 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:42.495 00:41:42.495 real 0m31.470s 00:41:42.495 user 0m42.558s 00:41:42.495 sys 0m12.222s 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:42.495 ************************************ 00:41:42.495 END TEST nvmf_zcopy 00:41:42.495 ************************************ 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:42.495 ************************************ 00:41:42.495 START TEST nvmf_nmic 00:41:42.495 ************************************ 00:41:42.495 21:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:42.495 * Looking for test storage... 
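The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages in the nvmf_zcopy output above is the target rejecting repeated nvmf_subsystem_add_ns calls for an NSID that is still attached: spdk_nvmf_subsystem_add_ns_ext refuses the duplicate and nvmf_rpc_ns_paused reports the failure, while the I/O summary (5797.73 IOPS, 45.29 MiB/s over 5.03 s) and the abort statistics (10813 aborts submitted, 10747 successful) come from the data path running alongside that namespace churn. A minimal sketch of the same collision using the RPCs visible in this log, assuming a running SPDK nvmf target that already exposes subsystem nqn.2016-06.io.spdk:cnode1 and the delay0 bdev created above with bdev_delay_create (the scripts/rpc.py path is relative to the SPDK source tree):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # first attach of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # duplicate attach is rejected: "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # detaching NSID 1 lets the next add succeed

The same remove/add pair appears in the zcopy.sh trace above (target/zcopy.sh@52 and @54, with bdev_delay_create at @53 in between) right before the abort example is launched.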
00:41:42.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:42.495 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:42.754 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:42.754 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:42.754 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:42.754 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:42.754 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.755 --rc genhtml_branch_coverage=1 00:41:42.755 --rc genhtml_function_coverage=1 00:41:42.755 --rc genhtml_legend=1 00:41:42.755 --rc geninfo_all_blocks=1 00:41:42.755 --rc geninfo_unexecuted_blocks=1 00:41:42.755 00:41:42.755 ' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.755 --rc genhtml_branch_coverage=1 00:41:42.755 --rc genhtml_function_coverage=1 00:41:42.755 --rc genhtml_legend=1 00:41:42.755 --rc geninfo_all_blocks=1 00:41:42.755 --rc geninfo_unexecuted_blocks=1 00:41:42.755 00:41:42.755 ' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.755 --rc genhtml_branch_coverage=1 00:41:42.755 --rc genhtml_function_coverage=1 00:41:42.755 --rc genhtml_legend=1 00:41:42.755 --rc geninfo_all_blocks=1 00:41:42.755 --rc geninfo_unexecuted_blocks=1 00:41:42.755 00:41:42.755 ' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.755 --rc genhtml_branch_coverage=1 00:41:42.755 --rc genhtml_function_coverage=1 00:41:42.755 --rc genhtml_legend=1 00:41:42.755 --rc geninfo_all_blocks=1 00:41:42.755 --rc geninfo_unexecuted_blocks=1 00:41:42.755 00:41:42.755 ' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.755 21:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:42.755 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:42.756 21:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:46.048 21:09:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:46.048 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:46.049 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:46.049 21:09:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:46.049 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:46.049 Found net devices under 0000:84:00.0: cvl_0_0 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.049 
21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:46.049 Found net devices under 0000:84:00.1: cvl_0_1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
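At this point nvmftestinit has built the two-namespace loopback topology the rest of the test runs over: port 0000:84:00.0 (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side, 10.0.0.2/24, while its peer 0000:84:00.1 (cvl_0_1) stays in the root namespace as the initiator side, 10.0.0.1/24. A minimal standalone sketch of the same wiring (interface and namespace names are the ones from this run and will differ on other hosts; address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk                # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk \
      ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace

Bringing both links up, opening TCP port 4420 with an iptables ACCEPT rule, and a cross-namespace ping check follow in the trace below.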
00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:46.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:46.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:41:46.049 00:41:46.049 --- 10.0.0.2 ping statistics --- 00:41:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.049 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:46.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:46.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:41:46.049 00:41:46.049 --- 10.0.0.1 ping statistics --- 00:41:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.049 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1912200 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1912200 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1912200 ']' 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.049 21:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.049 [2024-10-08 21:09:14.506116] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:46.049 [2024-10-08 21:09:14.507406] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:41:46.050 [2024-10-08 21:09:14.507475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.050 [2024-10-08 21:09:14.628935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.309 [2024-10-08 21:09:14.860957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.309 [2024-10-08 21:09:14.861067] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.309 [2024-10-08 21:09:14.861104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.309 [2024-10-08 21:09:14.861142] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.309 [2024-10-08 21:09:14.861153] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:46.309 [2024-10-08 21:09:14.864384] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.309 [2024-10-08 21:09:14.864482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:41:46.309 [2024-10-08 21:09:14.864573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:41:46.309 [2024-10-08 21:09:14.864576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.309 [2024-10-08 21:09:15.051760] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:46.309 [2024-10-08 21:09:15.052304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:46.309 [2024-10-08 21:09:15.052599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
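The nvmf target is launched inside that namespace with interrupt mode enabled; the NOTICE lines above confirm that all four reactors start and that the app thread plus the nvmf_tgt poll-group threads come up in interrupt rather than polling mode. A condensed sketch of the launch step (the binary path is relative to the spdk tree, the 0xF core mask and -e 0xFFFF trace mask are the values used in this run, and the simple RPC probe loop stands in for the harness's waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # do not issue rpc.py commands until the RPC socket answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Once the socket is up, the test drives the target purely over JSON-RPC -- nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener -- as traced below.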
00:41:46.309 [2024-10-08 21:09:15.053468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:46.309 [2024-10-08 21:09:15.053850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:46.567 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 [2024-10-08 21:09:15.165833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 Malloc0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:46.568 
21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 [2024-10-08 21:09:15.249716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:46.568 test case1: single bdev can't be used in multiple subsystems 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 [2024-10-08 21:09:15.273418] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:46.568 [2024-10-08 21:09:15.273453] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:46.568 [2024-10-08 21:09:15.273470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.568 request: 00:41:46.568 { 00:41:46.568 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:46.568 "namespace": { 00:41:46.568 "bdev_name": "Malloc0", 00:41:46.568 "no_auto_visible": false 00:41:46.568 }, 00:41:46.568 "method": "nvmf_subsystem_add_ns", 00:41:46.568 "req_id": 1 00:41:46.568 } 00:41:46.568 Got JSON-RPC error response 00:41:46.568 response: 00:41:46.568 { 00:41:46.568 "code": -32602, 00:41:46.568 "message": "Invalid parameters" 00:41:46.568 } 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:46.568 21:09:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:46.568 Adding namespace failed - expected result. 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:46.568 test case2: host connect to nvmf target in multiple paths 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.568 [2024-10-08 21:09:15.281521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.568 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:46.826 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:47.084 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:47.084 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:41:47.084 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:47.084 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:41:47.084 21:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:41:48.979 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:48.979 [global] 00:41:48.979 thread=1 00:41:48.979 invalidate=1 
00:41:48.979 rw=write 00:41:48.979 time_based=1 00:41:48.979 runtime=1 00:41:48.979 ioengine=libaio 00:41:48.979 direct=1 00:41:48.979 bs=4096 00:41:48.979 iodepth=1 00:41:48.979 norandommap=0 00:41:48.979 numjobs=1 00:41:48.979 00:41:48.979 verify_dump=1 00:41:48.979 verify_backlog=512 00:41:48.979 verify_state_save=0 00:41:48.979 do_verify=1 00:41:48.979 verify=crc32c-intel 00:41:48.979 [job0] 00:41:48.979 filename=/dev/nvme0n1 00:41:48.979 Could not set queue depth (nvme0n1) 00:41:49.236 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:49.236 fio-3.35 00:41:49.236 Starting 1 thread 00:41:50.610 00:41:50.610 job0: (groupid=0, jobs=1): err= 0: pid=1912698: Tue Oct 8 21:09:18 2024 00:41:50.610 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:41:50.610 slat (nsec): min=5232, max=51687, avg=11813.56, stdev=6427.74 00:41:50.610 clat (usec): min=196, max=586, avg=255.83, stdev=44.96 00:41:50.610 lat (usec): min=202, max=604, avg=267.65, stdev=45.93 00:41:50.610 clat percentiles (usec): 00:41:50.610 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:41:50.610 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 249], 00:41:50.610 | 70.00th=[ 277], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 334], 00:41:50.610 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 537], 00:41:50.610 | 99.99th=[ 586] 00:41:50.610 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(9.91MiB/1000msec); 0 zone resets 00:41:50.610 slat (usec): min=6, max=989, avg= 9.69, stdev=19.82 00:41:50.610 clat (usec): min=140, max=383, avg=163.17, stdev=16.18 00:41:50.610 lat (usec): min=148, max=1227, avg=172.86, stdev=27.14 00:41:50.610 clat percentiles (usec): 00:41:50.610 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:41:50.610 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:41:50.610 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:41:50.610 | 99.00th=[ 215], 99.50th=[ 237], 99.90th=[ 326], 99.95th=[ 359], 00:41:50.610 | 99.99th=[ 383] 00:41:50.610 bw ( KiB/s): min=10624, max=10624, per=100.00%, avg=10624.00, stdev= 0.00, samples=1 00:41:50.610 iops : min= 2656, max= 2656, avg=2656.00, stdev= 0.00, samples=1 00:41:50.610 lat (usec) : 250=81.98%, 500=17.97%, 750=0.04% 00:41:50.610 cpu : usr=2.20%, sys=5.50%, ctx=4588, majf=0, minf=1 00:41:50.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:50.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.610 issued rwts: total=2048,2537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:50.610 00:41:50.610 Run status group 0 (all jobs): 00:41:50.610 READ: bw=8192KiB/s (8389kB/s), 8192KiB/s-8192KiB/s (8389kB/s-8389kB/s), io=8192KiB (8389kB), run=1000-1000msec 00:41:50.610 WRITE: bw=9.91MiB/s (10.4MB/s), 9.91MiB/s-9.91MiB/s (10.4MB/s-10.4MB/s), io=9.91MiB (10.4MB), run=1000-1000msec 00:41:50.610 00:41:50.610 Disk stats (read/write): 00:41:50.610 nvme0n1: ios=2043/2048, merge=0/0, ticks=649/333, in_queue=982, util=98.70% 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:50.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- 
# waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:50.610 rmmod nvme_tcp 00:41:50.610 rmmod nvme_fabrics 00:41:50.610 rmmod nvme_keyring 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1912200 ']' 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1912200 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1912200 ']' 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1912200 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912200 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912200' 00:41:50.610 killing process with pid 
1912200 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1912200 00:41:50.610 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1912200 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:41:51.179 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:51.180 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:53.718 00:41:53.718 real 0m10.885s 00:41:53.718 user 0m17.939s 00:41:53.718 sys 0m4.580s 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:53.718 ************************************ 00:41:53.718 END TEST nvmf_nmic 00:41:53.718 ************************************ 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:53.718 ************************************ 00:41:53.718 START TEST nvmf_fio_target 00:41:53.718 ************************************ 00:41:53.718 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:53.718 * Looking for test storage... 
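Teardown mirrors the setup: the host disconnects from nqn.2016-06.io.spdk:cnode1, the kernel NVMe/TCP modules are unloaded, the target process (pid 1912200) is killed, the SPDK_NVMF-tagged iptables rules are stripped, and the namespace and leftover addresses are cleaned up before the next test, nvmf_fio_target, repeats the whole init sequence. A condensed sketch of that cleanup (same names as this run; the namespace delete is the effect of the harness's remove_spdk_ns helper):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics
  kill "$nvmfpid"
  # keep all firewall rules except the ones the test harness tagged
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1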
00:41:53.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:53.718 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.719 --rc genhtml_branch_coverage=1 00:41:53.719 --rc genhtml_function_coverage=1 00:41:53.719 --rc genhtml_legend=1 00:41:53.719 --rc geninfo_all_blocks=1 00:41:53.719 --rc geninfo_unexecuted_blocks=1 00:41:53.719 00:41:53.719 ' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.719 --rc genhtml_branch_coverage=1 00:41:53.719 --rc genhtml_function_coverage=1 00:41:53.719 --rc genhtml_legend=1 00:41:53.719 --rc geninfo_all_blocks=1 00:41:53.719 --rc geninfo_unexecuted_blocks=1 00:41:53.719 00:41:53.719 ' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.719 --rc genhtml_branch_coverage=1 00:41:53.719 --rc genhtml_function_coverage=1 00:41:53.719 --rc genhtml_legend=1 00:41:53.719 --rc geninfo_all_blocks=1 00:41:53.719 --rc geninfo_unexecuted_blocks=1 00:41:53.719 00:41:53.719 ' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.719 --rc genhtml_branch_coverage=1 00:41:53.719 --rc genhtml_function_coverage=1 00:41:53.719 --rc genhtml_legend=1 00:41:53.719 --rc geninfo_all_blocks=1 00:41:53.719 --rc geninfo_unexecuted_blocks=1 00:41:53.719 
00:41:53.719 ' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:53.719 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:56.252 21:09:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:56.252 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:56.253 21:09:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:56.253 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:56.253 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:56.253 Found net 
devices under 0000:84:00.0: cvl_0_0 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:56.253 Found net devices under 0000:84:00.1: cvl_0_1 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:56.253 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:56.253 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:56.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:56.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:41:56.512 00:41:56.512 --- 10.0.0.2 ping statistics --- 00:41:56.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.512 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:56.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:56.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:41:56.512 00:41:56.512 --- 10.0.0.1 ping statistics --- 00:41:56.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.512 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1914919 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1914919 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1914919 ']' 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:56.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
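The nvmf_tcp_init sequence traced above reduces to a small loopback topology: the first E810 port (cvl_0_0) is moved into a private network namespace and gets the target address 10.0.0.2, its sibling port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-verified. Condensed from the commands above (same names and addresses; error handling omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator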
00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:56.512 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:56.512 [2024-10-08 21:09:25.198965] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:56.512 [2024-10-08 21:09:25.200289] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:41:56.512 [2024-10-08 21:09:25.200351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:56.773 [2024-10-08 21:09:25.276662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:56.773 [2024-10-08 21:09:25.403256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:56.773 [2024-10-08 21:09:25.403327] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:56.773 [2024-10-08 21:09:25.403344] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:56.773 [2024-10-08 21:09:25.403357] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:56.773 [2024-10-08 21:09:25.403370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:56.773 [2024-10-08 21:09:25.405277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.773 [2024-10-08 21:09:25.405334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:41:56.773 [2024-10-08 21:09:25.407672] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:41:56.773 [2024-10-08 21:09:25.407679] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.033 [2024-10-08 21:09:25.568156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:57.033 [2024-10-08 21:09:25.568529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:57.033 [2024-10-08 21:09:25.568844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:57.033 [2024-10-08 21:09:25.569803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:57.033 [2024-10-08 21:09:25.570121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
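At this point the target is up: nvmfappstart launched nvmf_tgt inside the target namespace with --interrupt-mode and -m 0xF, the four reactors (cores 0-3) started, and the app thread plus every nvmf poll-group thread was switched to interrupt mode, so the reactors wait on file descriptors rather than busy-polling between requests. A simplified sketch of the launch-and-wait step (the polling loop is an assumption about the shape of the waitforlisten helper, not a copy of it):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # block until the app answers on its UNIX-domain RPC socket
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done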
00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:57.033 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:57.603 [2024-10-08 21:09:26.304724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:57.603 21:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:58.542 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:58.542 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:59.112 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:59.112 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:00.050 21:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:00.050 21:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:00.619 21:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:00.619 21:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:01.188 21:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:02.126 21:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:02.126 21:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:02.695 21:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:02.695 21:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:03.332 21:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:42:03.332 21:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:04.272 21:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:04.841 21:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:04.841 21:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:05.409 21:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:05.409 21:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:05.666 21:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:06.230 [2024-10-08 21:09:34.928575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:06.231 21:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:42:07.162 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:42:07.419 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:42:07.676 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:42:10.200 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:10.200 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:42:10.200 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:10.201 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:42:10.201 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:10.201 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:42:10.201 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:10.201 [global] 00:42:10.201 thread=1 00:42:10.201 invalidate=1 00:42:10.201 rw=write 00:42:10.201 time_based=1 00:42:10.201 runtime=1 00:42:10.201 ioengine=libaio 00:42:10.201 direct=1 00:42:10.201 bs=4096 00:42:10.201 iodepth=1 00:42:10.201 norandommap=0 00:42:10.201 numjobs=1 00:42:10.201 00:42:10.201 verify_dump=1 00:42:10.201 verify_backlog=512 00:42:10.201 verify_state_save=0 00:42:10.201 do_verify=1 00:42:10.201 verify=crc32c-intel 00:42:10.201 [job0] 00:42:10.201 filename=/dev/nvme0n1 00:42:10.201 [job1] 00:42:10.201 filename=/dev/nvme0n2 00:42:10.201 [job2] 00:42:10.201 filename=/dev/nvme0n3 00:42:10.201 [job3] 00:42:10.201 filename=/dev/nvme0n4 00:42:10.201 Could not set queue depth (nvme0n1) 00:42:10.201 Could not set queue depth (nvme0n2) 00:42:10.201 Could not set queue depth (nvme0n3) 00:42:10.201 Could not set queue depth (nvme0n4) 00:42:10.201 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:10.201 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:10.201 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:10.201 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:10.201 fio-3.35 00:42:10.201 Starting 4 threads 00:42:11.572 00:42:11.572 job0: (groupid=0, jobs=1): err= 0: pid=1916509: Tue Oct 8 21:09:39 2024 00:42:11.572 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:42:11.572 slat (nsec): min=4662, max=35432, avg=10186.20, stdev=4887.29 00:42:11.572 clat (usec): min=213, max=1294, avg=271.33, stdev=59.97 00:42:11.572 lat (usec): min=218, max=1300, avg=281.52, stdev=61.48 00:42:11.572 clat percentiles (usec): 00:42:11.572 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:42:11.572 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 262], 60.00th=[ 273], 00:42:11.572 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 371], 00:42:11.572 | 99.00th=[ 420], 99.50th=[ 482], 99.90th=[ 996], 99.95th=[ 1172], 00:42:11.572 | 99.99th=[ 1303] 00:42:11.572 write: IOPS=2097, BW=8392KiB/s (8593kB/s)(8400KiB/1001msec); 0 zone resets 00:42:11.572 slat (nsec): min=6078, max=30532, avg=8876.40, stdev=3798.00 00:42:11.572 clat (usec): min=142, max=444, avg=187.16, stdev=37.55 00:42:11.572 lat (usec): min=149, max=470, avg=196.04, stdev=38.64 00:42:11.572 clat percentiles (usec): 00:42:11.572 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:42:11.572 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 180], 00:42:11.572 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 245], 95.00th=[ 277], 00:42:11.572 | 99.00th=[ 289], 99.50th=[ 
297], 99.90th=[ 310], 99.95th=[ 322], 00:42:11.572 | 99.99th=[ 445] 00:42:11.572 bw ( KiB/s): min= 8192, max= 8192, per=33.58%, avg=8192.00, stdev= 0.00, samples=1 00:42:11.572 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:11.572 lat (usec) : 250=67.89%, 500=31.94%, 750=0.05%, 1000=0.07% 00:42:11.572 lat (msec) : 2=0.05% 00:42:11.572 cpu : usr=2.20%, sys=4.00%, ctx=4148, majf=0, minf=1 00:42:11.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:11.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.572 issued rwts: total=2048,2100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:11.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:11.572 job1: (groupid=0, jobs=1): err= 0: pid=1916510: Tue Oct 8 21:09:39 2024 00:42:11.572 read: IOPS=2265, BW=9063KiB/s (9280kB/s)(9072KiB/1001msec) 00:42:11.572 slat (nsec): min=6732, max=27999, avg=7481.42, stdev=1008.32 00:42:11.572 clat (usec): min=187, max=1407, avg=221.43, stdev=27.61 00:42:11.572 lat (usec): min=195, max=1416, avg=228.92, stdev=27.67 00:42:11.572 clat percentiles (usec): 00:42:11.572 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 210], 00:42:11.572 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:42:11.572 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:42:11.572 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 273], 00:42:11.572 | 99.99th=[ 1401] 00:42:11.572 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:42:11.572 slat (nsec): min=8344, max=37058, avg=9370.72, stdev=1608.86 00:42:11.572 clat (usec): min=145, max=321, avg=173.80, stdev=20.15 00:42:11.572 lat (usec): min=154, max=330, avg=183.17, stdev=20.45 00:42:11.572 clat percentiles (usec): 00:42:11.572 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 155], 20.00th=[ 159], 00:42:11.572 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:42:11.572 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 210], 00:42:11.572 | 99.00th=[ 231], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 306], 00:42:11.572 | 99.99th=[ 322] 00:42:11.572 bw ( KiB/s): min=10720, max=10720, per=43.95%, avg=10720.00, stdev= 0.00, samples=1 00:42:11.572 iops : min= 2680, max= 2680, avg=2680.00, stdev= 0.00, samples=1 00:42:11.572 lat (usec) : 250=98.61%, 500=1.37% 00:42:11.572 lat (msec) : 2=0.02% 00:42:11.572 cpu : usr=2.30%, sys=6.40%, ctx=4828, majf=0, minf=1 00:42:11.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:11.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.572 issued rwts: total=2268,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:11.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:11.572 job2: (groupid=0, jobs=1): err= 0: pid=1916511: Tue Oct 8 21:09:39 2024 00:42:11.572 read: IOPS=525, BW=2102KiB/s (2152kB/s)(2108KiB/1003msec) 00:42:11.572 slat (nsec): min=7977, max=24275, avg=8807.83, stdev=1076.95 00:42:11.572 clat (usec): min=212, max=41007, avg=1442.28, stdev=6774.52 00:42:11.572 lat (usec): min=220, max=41019, avg=1451.08, stdev=6775.06 00:42:11.572 clat percentiles (usec): 00:42:11.572 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:42:11.572 | 30.00th=[ 245], 40.00th=[ 262], 50.00th=[ 281], 60.00th=[ 289], 
00:42:11.573 | 70.00th=[ 306], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 416], 00:42:11.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:11.573 | 99.99th=[41157] 00:42:11.573 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:42:11.573 slat (nsec): min=9791, max=28186, avg=11371.62, stdev=1639.56 00:42:11.573 clat (usec): min=160, max=349, avg=216.20, stdev=27.18 00:42:11.573 lat (usec): min=171, max=375, avg=227.57, stdev=27.59 00:42:11.573 clat percentiles (usec): 00:42:11.573 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 190], 00:42:11.573 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:42:11.573 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:42:11.573 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 351], 00:42:11.573 | 99.99th=[ 351] 00:42:11.573 bw ( KiB/s): min= 8192, max= 8192, per=33.58%, avg=8192.00, stdev= 0.00, samples=1 00:42:11.573 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:11.573 lat (usec) : 250=71.44%, 500=27.60% 00:42:11.573 lat (msec) : 50=0.97% 00:42:11.573 cpu : usr=1.40%, sys=1.90%, ctx=1551, majf=0, minf=1 00:42:11.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.573 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:11.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:11.573 job3: (groupid=0, jobs=1): err= 0: pid=1916512: Tue Oct 8 21:09:39 2024 00:42:11.573 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:42:11.573 slat (nsec): min=9641, max=16502, avg=14481.82, stdev=1408.11 00:42:11.573 clat (usec): min=40853, max=41043, avg=40976.21, stdev=37.32 00:42:11.573 lat (usec): min=40863, max=41058, avg=40990.69, stdev=37.85 00:42:11.573 clat percentiles (usec): 00:42:11.573 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:11.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:11.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:11.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:11.573 | 99.99th=[41157] 00:42:11.573 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:42:11.573 slat (nsec): min=10797, max=33329, avg=11699.09, stdev=1338.14 00:42:11.573 clat (usec): min=174, max=250, avg=206.74, stdev=13.44 00:42:11.573 lat (usec): min=185, max=284, avg=218.43, stdev=13.71 00:42:11.573 clat percentiles (usec): 00:42:11.573 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 196], 00:42:11.573 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:42:11.573 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:42:11.573 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 251], 99.95th=[ 251], 00:42:11.573 | 99.99th=[ 251] 00:42:11.573 bw ( KiB/s): min= 4096, max= 4096, per=16.79%, avg=4096.00, stdev= 0.00, samples=1 00:42:11.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:11.573 lat (usec) : 250=95.69%, 500=0.19% 00:42:11.573 lat (msec) : 50=4.12% 00:42:11.573 cpu : usr=0.59%, sys=0.59%, ctx=534, majf=0, minf=1 00:42:11.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.573 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:11.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:11.573 00:42:11.573 Run status group 0 (all jobs): 00:42:11.573 READ: bw=18.7MiB/s (19.6MB/s), 86.6KiB/s-9063KiB/s (88.7kB/s-9280kB/s), io=19.0MiB (19.9MB), run=1001-1016msec 00:42:11.573 WRITE: bw=23.8MiB/s (25.0MB/s), 2016KiB/s-9.99MiB/s (2064kB/s-10.5MB/s), io=24.2MiB (25.4MB), run=1001-1016msec 00:42:11.573 00:42:11.573 Disk stats (read/write): 00:42:11.573 nvme0n1: ios=1586/1970, merge=0/0, ticks=442/356, in_queue=798, util=85.77% 00:42:11.573 nvme0n2: ios=1956/2048, merge=0/0, ticks=421/355, in_queue=776, util=85.89% 00:42:11.573 nvme0n3: ios=523/1024, merge=0/0, ticks=594/221, in_queue=815, util=88.70% 00:42:11.573 nvme0n4: ios=19/512, merge=0/0, ticks=781/105, in_queue=886, util=89.56% 00:42:11.573 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:11.573 [global] 00:42:11.573 thread=1 00:42:11.573 invalidate=1 00:42:11.573 rw=randwrite 00:42:11.573 time_based=1 00:42:11.573 runtime=1 00:42:11.573 ioengine=libaio 00:42:11.573 direct=1 00:42:11.573 bs=4096 00:42:11.573 iodepth=1 00:42:11.573 norandommap=0 00:42:11.573 numjobs=1 00:42:11.573 00:42:11.573 verify_dump=1 00:42:11.573 verify_backlog=512 00:42:11.573 verify_state_save=0 00:42:11.573 do_verify=1 00:42:11.573 verify=crc32c-intel 00:42:11.573 [job0] 00:42:11.573 filename=/dev/nvme0n1 00:42:11.573 [job1] 00:42:11.573 filename=/dev/nvme0n2 00:42:11.573 [job2] 00:42:11.573 filename=/dev/nvme0n3 00:42:11.573 [job3] 00:42:11.573 filename=/dev/nvme0n4 00:42:11.573 Could not set queue depth (nvme0n1) 00:42:11.573 Could not set queue depth (nvme0n2) 00:42:11.573 Could not set queue depth (nvme0n3) 00:42:11.573 Could not set queue depth (nvme0n4) 00:42:11.573 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:11.573 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:11.573 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:11.573 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:11.573 fio-3.35 00:42:11.573 Starting 4 threads 00:42:12.944 00:42:12.945 job0: (groupid=0, jobs=1): err= 0: pid=1916734: Tue Oct 8 21:09:41 2024 00:42:12.945 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:42:12.945 slat (nsec): min=7243, max=19206, avg=15160.09, stdev=2039.42 00:42:12.945 clat (usec): min=40942, max=41012, avg=40980.10, stdev=17.51 00:42:12.945 lat (usec): min=40960, max=41027, avg=40995.26, stdev=17.57 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:12.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:12.945 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:12.945 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:12.945 | 99.99th=[41157] 00:42:12.945 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:42:12.945 slat (nsec): min=6337, max=26501, avg=8407.36, stdev=3062.83 00:42:12.945 clat (usec): min=142, 
max=434, avg=195.72, stdev=37.25 00:42:12.945 lat (usec): min=150, max=442, avg=204.13, stdev=37.96 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:42:12.945 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 200], 00:42:12.945 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 273], 00:42:12.945 | 99.00th=[ 293], 99.50th=[ 424], 99.90th=[ 433], 99.95th=[ 433], 00:42:12.945 | 99.99th=[ 433] 00:42:12.945 bw ( KiB/s): min= 4096, max= 4096, per=20.17%, avg=4096.00, stdev= 0.00, samples=1 00:42:12.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:12.945 lat (usec) : 250=88.20%, 500=7.68% 00:42:12.945 lat (msec) : 50=4.12% 00:42:12.945 cpu : usr=0.30%, sys=0.30%, ctx=534, majf=0, minf=1 00:42:12.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:12.945 job1: (groupid=0, jobs=1): err= 0: pid=1916735: Tue Oct 8 21:09:41 2024 00:42:12.945 read: IOPS=519, BW=2077KiB/s (2126kB/s)(2116KiB/1019msec) 00:42:12.945 slat (nsec): min=4954, max=34700, avg=10929.92, stdev=5473.74 00:42:12.945 clat (usec): min=214, max=41231, avg=1495.27, stdev=6980.04 00:42:12.945 lat (usec): min=219, max=41247, avg=1506.20, stdev=6980.86 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:42:12.945 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 277], 00:42:12.945 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 334], 00:42:12.945 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:12.945 | 99.99th=[41157] 00:42:12.945 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:42:12.945 slat (nsec): min=6632, max=31307, avg=8827.63, stdev=2156.36 00:42:12.945 clat (usec): min=148, max=433, avg=203.58, stdev=30.57 00:42:12.945 lat (usec): min=156, max=441, avg=212.41, stdev=31.17 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:42:12.945 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 208], 00:42:12.945 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:42:12.945 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 388], 99.95th=[ 433], 00:42:12.945 | 99.99th=[ 433] 00:42:12.945 bw ( KiB/s): min= 8192, max= 8192, per=40.33%, avg=8192.00, stdev= 0.00, samples=1 00:42:12.945 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:12.945 lat (usec) : 250=76.37%, 500=22.54%, 750=0.06% 00:42:12.945 lat (msec) : 50=1.03% 00:42:12.945 cpu : usr=0.39%, sys=1.77%, ctx=1554, majf=0, minf=1 00:42:12.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:12.945 job2: (groupid=0, jobs=1): err= 0: pid=1916736: Tue Oct 8 21:09:41 2024 00:42:12.945 read: IOPS=1812, BW=7251KiB/s (7425kB/s)(7548KiB/1041msec) 
00:42:12.945 slat (nsec): min=6747, max=30005, avg=8512.04, stdev=2338.59 00:42:12.945 clat (usec): min=205, max=41024, avg=319.88, stdev=1622.95 00:42:12.945 lat (usec): min=212, max=41039, avg=328.39, stdev=1623.20 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:42:12.945 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:42:12.945 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 482], 00:42:12.945 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 00:42:12.945 | 99.99th=[41157] 00:42:12.945 write: IOPS=1967, BW=7869KiB/s (8058kB/s)(8192KiB/1041msec); 0 zone resets 00:42:12.945 slat (nsec): min=8368, max=34299, avg=10425.03, stdev=2344.23 00:42:12.945 clat (usec): min=142, max=800, avg=189.96, stdev=40.28 00:42:12.945 lat (usec): min=152, max=809, avg=200.38, stdev=40.95 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:42:12.945 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:42:12.945 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 233], 95.00th=[ 285], 00:42:12.945 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 490], 99.95th=[ 701], 00:42:12.945 | 99.99th=[ 799] 00:42:12.945 bw ( KiB/s): min= 8192, max= 8192, per=40.33%, avg=8192.00, stdev= 0.00, samples=2 00:42:12.945 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:42:12.945 lat (usec) : 250=81.63%, 500=17.00%, 750=1.27%, 1000=0.03% 00:42:12.945 lat (msec) : 50=0.08% 00:42:12.945 cpu : usr=2.21%, sys=5.29%, ctx=3935, majf=0, minf=1 00:42:12.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 issued rwts: total=1887,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:12.945 job3: (groupid=0, jobs=1): err= 0: pid=1916737: Tue Oct 8 21:09:41 2024 00:42:12.945 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:42:12.945 slat (nsec): min=7727, max=34753, avg=9143.23, stdev=2511.04 00:42:12.945 clat (usec): min=239, max=40992, avg=402.34, stdev=2161.29 00:42:12.945 lat (usec): min=248, max=41008, avg=411.49, stdev=2161.59 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:42:12.945 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:42:12.945 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:42:12.945 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[41157], 99.95th=[41157], 00:42:12.945 | 99.99th=[41157] 00:42:12.945 write: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec); 0 zone resets 00:42:12.945 slat (nsec): min=10082, max=34513, avg=11566.16, stdev=2268.02 00:42:12.945 clat (usec): min=162, max=908, avg=199.01, stdev=34.64 00:42:12.945 lat (usec): min=173, max=919, avg=210.57, stdev=34.90 00:42:12.945 clat percentiles (usec): 00:42:12.945 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:42:12.945 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:42:12.945 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 237], 00:42:12.945 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 693], 99.95th=[ 906], 00:42:12.945 | 99.99th=[ 906] 00:42:12.945 bw ( KiB/s): min= 5848, max= 5848, per=28.79%, avg=5848.00, stdev= 
0.00, samples=1 00:42:12.945 iops : min= 1462, max= 1462, avg=1462.00, stdev= 0.00, samples=1 00:42:12.945 lat (usec) : 250=52.87%, 500=46.85%, 750=0.09%, 1000=0.03% 00:42:12.945 lat (msec) : 50=0.15% 00:42:12.945 cpu : usr=1.80%, sys=3.40%, ctx=3239, majf=0, minf=1 00:42:12.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.945 issued rwts: total=1536,1702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:12.945 00:42:12.945 Run status group 0 (all jobs): 00:42:12.945 READ: bw=14.9MiB/s (15.6MB/s), 87.3KiB/s-7251KiB/s (89.4kB/s-7425kB/s), io=15.5MiB (16.3MB), run=1001-1041msec 00:42:12.945 WRITE: bw=19.8MiB/s (20.8MB/s), 2032KiB/s-7869KiB/s (2081kB/s-8058kB/s), io=20.6MiB (21.7MB), run=1001-1041msec 00:42:12.945 00:42:12.945 Disk stats (read/write): 00:42:12.945 nvme0n1: ios=68/512, merge=0/0, ticks=767/98, in_queue=865, util=86.97% 00:42:12.945 nvme0n2: ios=563/1024, merge=0/0, ticks=1505/206, in_queue=1711, util=98.68% 00:42:12.945 nvme0n3: ios=1613/2048, merge=0/0, ticks=412/376, in_queue=788, util=88.94% 00:42:12.945 nvme0n4: ios=1179/1536, merge=0/0, ticks=1448/300, in_queue=1748, util=98.00% 00:42:12.945 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:12.945 [global] 00:42:12.945 thread=1 00:42:12.945 invalidate=1 00:42:12.945 rw=write 00:42:12.945 time_based=1 00:42:12.945 runtime=1 00:42:12.945 ioengine=libaio 00:42:12.945 direct=1 00:42:12.945 bs=4096 00:42:12.945 iodepth=128 00:42:12.945 norandommap=0 00:42:12.946 numjobs=1 00:42:12.946 00:42:12.946 verify_dump=1 00:42:12.946 verify_backlog=512 00:42:12.946 verify_state_save=0 00:42:12.946 do_verify=1 00:42:12.946 verify=crc32c-intel 00:42:12.946 [job0] 00:42:12.946 filename=/dev/nvme0n1 00:42:12.946 [job1] 00:42:12.946 filename=/dev/nvme0n2 00:42:12.946 [job2] 00:42:12.946 filename=/dev/nvme0n3 00:42:12.946 [job3] 00:42:12.946 filename=/dev/nvme0n4 00:42:12.946 Could not set queue depth (nvme0n1) 00:42:12.946 Could not set queue depth (nvme0n2) 00:42:12.946 Could not set queue depth (nvme0n3) 00:42:12.946 Could not set queue depth (nvme0n4) 00:42:12.946 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:12.946 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:12.946 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:12.946 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:12.946 fio-3.35 00:42:12.946 Starting 4 threads 00:42:14.316 00:42:14.316 job0: (groupid=0, jobs=1): err= 0: pid=1917086: Tue Oct 8 21:09:42 2024 00:42:14.316 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:42:14.316 slat (usec): min=2, max=14119, avg=129.23, stdev=899.80 00:42:14.316 clat (usec): min=4416, max=45029, avg=16049.02, stdev=6200.32 00:42:14.316 lat (usec): min=4434, max=45038, avg=16178.25, stdev=6259.05 00:42:14.316 clat percentiles (usec): 00:42:14.316 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[10421], 20.00th=[11731], 00:42:14.316 | 30.00th=[12387], 40.00th=[13698], 
50.00th=[14746], 60.00th=[15664], 00:42:14.316 | 70.00th=[16712], 80.00th=[19530], 90.00th=[23725], 95.00th=[27395], 00:42:14.316 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:42:14.316 | 99.99th=[44827] 00:42:14.316 write: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1008msec); 0 zone resets 00:42:14.316 slat (usec): min=4, max=12233, avg=123.26, stdev=696.45 00:42:14.316 clat (usec): min=229, max=57516, avg=17644.09, stdev=8349.13 00:42:14.316 lat (usec): min=659, max=57523, avg=17767.35, stdev=8391.27 00:42:14.316 clat percentiles (usec): 00:42:14.316 | 1.00th=[ 2180], 5.00th=[ 4555], 10.00th=[ 7439], 20.00th=[10421], 00:42:14.316 | 30.00th=[11994], 40.00th=[14091], 50.00th=[16581], 60.00th=[20055], 00:42:14.316 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25822], 95.00th=[31589], 00:42:14.316 | 99.00th=[36439], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:42:14.316 | 99.99th=[57410] 00:42:14.316 bw ( KiB/s): min=14192, max=16552, per=22.59%, avg=15372.00, stdev=1668.77, samples=2 00:42:14.316 iops : min= 3548, max= 4138, avg=3843.00, stdev=417.19, samples=2 00:42:14.316 lat (usec) : 250=0.01%, 750=0.07%, 1000=0.08% 00:42:14.316 lat (msec) : 2=0.28%, 4=1.40%, 10=11.67%, 20=57.38%, 50=28.89% 00:42:14.316 lat (msec) : 100=0.21% 00:42:14.316 cpu : usr=3.67%, sys=5.26%, ctx=338, majf=0, minf=2 00:42:14.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:14.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.316 issued rwts: total=3584,3971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.316 job1: (groupid=0, jobs=1): err= 0: pid=1917089: Tue Oct 8 21:09:42 2024 00:42:14.316 read: IOPS=5595, BW=21.9MiB/s (22.9MB/s)(21.9MiB/1004msec) 00:42:14.316 slat (usec): min=2, max=28487, avg=87.16, stdev=620.71 00:42:14.316 clat (usec): min=777, max=40451, avg=11573.86, stdev=3944.54 00:42:14.316 lat (usec): min=3012, max=49885, avg=11661.02, stdev=3991.47 00:42:14.316 clat percentiles (usec): 00:42:14.317 | 1.00th=[ 6521], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10159], 00:42:14.317 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:42:14.317 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13173], 95.00th=[14615], 00:42:14.317 | 99.00th=[34866], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:42:14.317 | 99.99th=[40633] 00:42:14.317 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:42:14.317 slat (usec): min=4, max=11759, avg=78.69, stdev=493.54 00:42:14.317 clat (usec): min=666, max=28795, avg=11061.97, stdev=2558.31 00:42:14.317 lat (usec): min=3534, max=28802, avg=11140.66, stdev=2585.31 00:42:14.317 clat percentiles (usec): 00:42:14.317 | 1.00th=[ 6390], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[10290], 00:42:14.317 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:42:14.317 | 70.00th=[11207], 80.00th=[11600], 90.00th=[13173], 95.00th=[13960], 00:42:14.317 | 99.00th=[23725], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:42:14.317 | 99.99th=[28705] 00:42:14.317 bw ( KiB/s): min=21888, max=23168, per=33.11%, avg=22528.00, stdev=905.10, samples=2 00:42:14.317 iops : min= 5472, max= 5792, avg=5632.00, stdev=226.27, samples=2 00:42:14.317 lat (usec) : 750=0.01%, 1000=0.01% 00:42:14.317 lat (msec) : 4=0.16%, 10=17.24%, 20=80.30%, 50=2.28% 00:42:14.317 cpu : usr=5.08%, 
sys=9.07%, ctx=473, majf=0, minf=1 00:42:14.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:42:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.317 issued rwts: total=5618,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.317 job2: (groupid=0, jobs=1): err= 0: pid=1917090: Tue Oct 8 21:09:42 2024 00:42:14.317 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:42:14.317 slat (usec): min=2, max=25916, avg=133.09, stdev=881.85 00:42:14.317 clat (msec): min=6, max=108, avg=19.16, stdev=15.29 00:42:14.317 lat (msec): min=9, max=113, avg=19.30, stdev=15.35 00:42:14.317 clat percentiles (msec): 00:42:14.317 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:42:14.317 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:42:14.317 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 42], 95.00th=[ 52], 00:42:14.317 | 99.00th=[ 95], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:42:14.317 | 99.99th=[ 109] 00:42:14.317 write: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1005msec); 0 zone resets 00:42:14.317 slat (usec): min=4, max=19758, avg=160.36, stdev=877.50 00:42:14.317 clat (usec): min=1471, max=106409, avg=18893.08, stdev=14243.98 00:42:14.317 lat (msec): min=6, max=106, avg=19.05, stdev=14.35 00:42:14.317 clat percentiles (msec): 00:42:14.317 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:42:14.317 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:42:14.317 | 70.00th=[ 19], 80.00th=[ 24], 90.00th=[ 29], 95.00th=[ 41], 00:42:14.317 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 107], 99.95th=[ 107], 00:42:14.317 | 99.99th=[ 107] 00:42:14.317 bw ( KiB/s): min= 9176, max=18404, per=20.27%, avg=13790.00, stdev=6525.18, samples=2 00:42:14.317 iops : min= 2294, max= 4601, avg=3447.50, stdev=1631.30, samples=2 00:42:14.317 lat (msec) : 2=0.02%, 10=2.14%, 20=74.60%, 50=18.37%, 100=4.47% 00:42:14.317 lat (msec) : 250=0.41% 00:42:14.317 cpu : usr=2.19%, sys=4.88%, ctx=431, majf=0, minf=1 00:42:14.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.317 issued rwts: total=3072,3571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.317 job3: (groupid=0, jobs=1): err= 0: pid=1917091: Tue Oct 8 21:09:42 2024 00:42:14.317 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:42:14.317 slat (usec): min=2, max=15219, avg=124.32, stdev=958.82 00:42:14.317 clat (usec): min=6294, max=53055, avg=15788.14, stdev=8603.26 00:42:14.317 lat (usec): min=6302, max=53059, avg=15912.46, stdev=8676.33 00:42:14.317 clat percentiles (usec): 00:42:14.317 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11338], 00:42:14.317 | 30.00th=[11863], 40.00th=[12256], 50.00th=[13042], 60.00th=[13960], 00:42:14.317 | 70.00th=[15795], 80.00th=[18220], 90.00th=[21365], 95.00th=[35914], 00:42:14.317 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:42:14.317 | 99.99th=[53216] 00:42:14.317 write: IOPS=3978, BW=15.5MiB/s (16.3MB/s)(15.7MiB/1011msec); 0 zone resets 00:42:14.317 slat (usec): min=4, max=20986, avg=124.46, stdev=796.41 00:42:14.317 clat (usec): 
min=1623, max=71013, avg=17718.85, stdev=9260.24 00:42:14.317 lat (usec): min=2771, max=71024, avg=17843.31, stdev=9325.80 00:42:14.317 clat percentiles (usec): 00:42:14.317 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 8225], 20.00th=[11207], 00:42:14.317 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13698], 60.00th=[17171], 00:42:14.317 | 70.00th=[22938], 80.00th=[24773], 90.00th=[30278], 95.00th=[35390], 00:42:14.317 | 99.00th=[53216], 99.50th=[53740], 99.90th=[64750], 99.95th=[64750], 00:42:14.317 | 99.99th=[70779] 00:42:14.317 bw ( KiB/s): min=12288, max=18872, per=22.90%, avg=15580.00, stdev=4655.59, samples=2 00:42:14.317 iops : min= 3072, max= 4718, avg=3895.00, stdev=1163.90, samples=2 00:42:14.317 lat (msec) : 2=0.01%, 4=0.11%, 10=11.45%, 20=64.38%, 50=21.92% 00:42:14.317 lat (msec) : 100=2.13% 00:42:14.317 cpu : usr=3.27%, sys=5.25%, ctx=326, majf=0, minf=1 00:42:14.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:14.317 issued rwts: total=3584,4022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:14.317 00:42:14.317 Run status group 0 (all jobs): 00:42:14.317 READ: bw=61.3MiB/s (64.2MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-22.9MB/s), io=61.9MiB (65.0MB), run=1004-1011msec 00:42:14.317 WRITE: bw=66.4MiB/s (69.7MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=67.2MiB (70.4MB), run=1004-1011msec 00:42:14.317 00:42:14.317 Disk stats (read/write): 00:42:14.317 nvme0n1: ios=2912/3072, merge=0/0, ticks=47429/52963, in_queue=100392, util=97.60% 00:42:14.317 nvme0n2: ios=4659/4723, merge=0/0, ticks=25313/24051, in_queue=49364, util=97.66% 00:42:14.317 nvme0n3: ios=2905/3072, merge=0/0, ticks=13243/16714, in_queue=29957, util=97.16% 00:42:14.317 nvme0n4: ios=3129/3147, merge=0/0, ticks=33021/31315, in_queue=64336, util=97.35% 00:42:14.317 21:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:14.317 [global] 00:42:14.317 thread=1 00:42:14.317 invalidate=1 00:42:14.317 rw=randwrite 00:42:14.317 time_based=1 00:42:14.317 runtime=1 00:42:14.317 ioengine=libaio 00:42:14.317 direct=1 00:42:14.317 bs=4096 00:42:14.317 iodepth=128 00:42:14.317 norandommap=0 00:42:14.317 numjobs=1 00:42:14.317 00:42:14.317 verify_dump=1 00:42:14.317 verify_backlog=512 00:42:14.317 verify_state_save=0 00:42:14.317 do_verify=1 00:42:14.317 verify=crc32c-intel 00:42:14.317 [job0] 00:42:14.317 filename=/dev/nvme0n1 00:42:14.317 [job1] 00:42:14.317 filename=/dev/nvme0n2 00:42:14.317 [job2] 00:42:14.317 filename=/dev/nvme0n3 00:42:14.317 [job3] 00:42:14.317 filename=/dev/nvme0n4 00:42:14.317 Could not set queue depth (nvme0n1) 00:42:14.317 Could not set queue depth (nvme0n2) 00:42:14.317 Could not set queue depth (nvme0n3) 00:42:14.317 Could not set queue depth (nvme0n4) 00:42:14.574 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:14.574 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:14.574 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:14.574 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:14.574 fio-3.35 00:42:14.574 Starting 4 threads 00:42:15.947 00:42:15.947 job0: (groupid=0, jobs=1): err= 0: pid=1917318: Tue Oct 8 21:09:44 2024 00:42:15.947 read: IOPS=4561, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1005msec) 00:42:15.947 slat (usec): min=3, max=16773, avg=104.69, stdev=775.71 00:42:15.947 clat (usec): min=1646, max=38393, avg=12712.52, stdev=5228.77 00:42:15.947 lat (usec): min=2630, max=38401, avg=12817.22, stdev=5272.53 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9372], 00:42:15.947 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11338], 00:42:15.947 | 70.00th=[12780], 80.00th=[16581], 90.00th=[19268], 95.00th=[24511], 00:42:15.947 | 99.00th=[32113], 99.50th=[34341], 99.90th=[37487], 99.95th=[38536], 00:42:15.947 | 99.99th=[38536] 00:42:15.947 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:42:15.947 slat (usec): min=5, max=23991, avg=105.00, stdev=677.76 00:42:15.947 clat (usec): min=2294, max=40081, avg=14938.88, stdev=7334.25 00:42:15.947 lat (usec): min=2302, max=40104, avg=15043.88, stdev=7387.87 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 4228], 5.00th=[ 6718], 10.00th=[ 7570], 20.00th=[ 8848], 00:42:15.947 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11338], 60.00th=[14222], 00:42:15.947 | 70.00th=[20055], 80.00th=[20579], 90.00th=[26608], 95.00th=[29754], 00:42:15.947 | 99.00th=[31327], 99.50th=[31327], 99.90th=[35914], 99.95th=[38536], 00:42:15.947 | 99.99th=[40109] 00:42:15.947 bw ( KiB/s): min=16384, max=20480, per=30.69%, avg=18432.00, stdev=2896.31, samples=2 00:42:15.947 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:42:15.947 lat (msec) : 2=0.01%, 4=0.61%, 10=36.37%, 20=43.99%, 50=19.02% 00:42:15.947 cpu : usr=4.08%, sys=6.87%, ctx=439, majf=0, minf=1 00:42:15.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:42:15.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:15.947 issued rwts: total=4584,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:15.947 job1: (groupid=0, jobs=1): err= 0: pid=1917319: Tue Oct 8 21:09:44 2024 00:42:15.947 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:42:15.947 slat (usec): min=3, max=46575, avg=150.69, stdev=1121.55 00:42:15.947 clat (usec): min=7686, max=74060, avg=18970.51, stdev=14155.98 00:42:15.947 lat (usec): min=7694, max=74064, avg=19121.20, stdev=14225.95 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10814], 00:42:15.947 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[14091], 00:42:15.947 | 70.00th=[15270], 80.00th=[29492], 90.00th=[38536], 95.00th=[44303], 00:42:15.947 | 99.00th=[72877], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:42:15.947 | 99.99th=[73925] 00:42:15.947 write: IOPS=3907, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1002msec); 0 zone resets 00:42:15.947 slat (usec): min=3, max=9247, avg=110.93, stdev=650.01 00:42:15.947 clat (usec): min=421, max=31069, avg=15007.23, stdev=6688.21 00:42:15.947 lat (usec): min=2698, max=31081, avg=15118.16, stdev=6700.32 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 5538], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[10421], 00:42:15.947 | 
30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[12649], 00:42:15.947 | 70.00th=[16581], 80.00th=[22938], 90.00th=[26346], 95.00th=[28705], 00:42:15.947 | 99.00th=[29754], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:42:15.947 | 99.99th=[31065] 00:42:15.947 bw ( KiB/s): min=12040, max=18256, per=25.22%, avg=15148.00, stdev=4395.38, samples=2 00:42:15.947 iops : min= 3010, max= 4564, avg=3787.00, stdev=1098.84, samples=2 00:42:15.947 lat (usec) : 500=0.01% 00:42:15.947 lat (msec) : 4=0.43%, 10=8.36%, 20=65.02%, 50=24.38%, 100=1.80% 00:42:15.947 cpu : usr=3.10%, sys=5.49%, ctx=313, majf=0, minf=1 00:42:15.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:15.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:15.947 issued rwts: total=3584,3915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:15.947 job2: (groupid=0, jobs=1): err= 0: pid=1917320: Tue Oct 8 21:09:44 2024 00:42:15.947 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:42:15.947 slat (usec): min=4, max=8360, avg=119.81, stdev=724.42 00:42:15.947 clat (usec): min=8033, max=27911, avg=14900.04, stdev=2688.85 00:42:15.947 lat (usec): min=8040, max=27925, avg=15019.85, stdev=2755.61 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11994], 20.00th=[13042], 00:42:15.947 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:42:15.947 | 70.00th=[15795], 80.00th=[16712], 90.00th=[18220], 95.00th=[19268], 00:42:15.947 | 99.00th=[23987], 99.50th=[25035], 99.90th=[27919], 99.95th=[27919], 00:42:15.947 | 99.99th=[27919] 00:42:15.947 write: IOPS=3729, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1008msec); 0 zone resets 00:42:15.947 slat (usec): min=5, max=6935, avg=144.41, stdev=669.75 00:42:15.947 clat (usec): min=6789, max=39481, avg=19672.50, stdev=8276.57 00:42:15.947 lat (usec): min=7622, max=39491, avg=19816.91, stdev=8341.82 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11994], 20.00th=[12649], 00:42:15.947 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15008], 60.00th=[20579], 00:42:15.947 | 70.00th=[25560], 80.00th=[28181], 90.00th=[32375], 95.00th=[35390], 00:42:15.947 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:42:15.947 | 99.99th=[39584] 00:42:15.947 bw ( KiB/s): min=12672, max=16384, per=24.19%, avg=14528.00, stdev=2624.78, samples=2 00:42:15.947 iops : min= 3168, max= 4096, avg=3632.00, stdev=656.20, samples=2 00:42:15.947 lat (msec) : 10=4.64%, 20=72.33%, 50=23.03% 00:42:15.947 cpu : usr=2.88%, sys=6.55%, ctx=353, majf=0, minf=1 00:42:15.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:42:15.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:15.947 issued rwts: total=3584,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:15.947 job3: (groupid=0, jobs=1): err= 0: pid=1917321: Tue Oct 8 21:09:44 2024 00:42:15.947 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:42:15.947 slat (usec): min=2, max=18265, avg=158.83, stdev=992.54 00:42:15.947 clat (usec): min=10160, max=80720, avg=22831.32, stdev=11087.35 00:42:15.947 lat (usec): min=10169, 
max=85744, avg=22990.14, stdev=11108.61 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[10290], 5.00th=[13304], 10.00th=[15401], 20.00th=[15664], 00:42:15.947 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18744], 60.00th=[20055], 00:42:15.947 | 70.00th=[23200], 80.00th=[27395], 90.00th=[39060], 95.00th=[42730], 00:42:15.947 | 99.00th=[70779], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:42:15.947 | 99.99th=[80217] 00:42:15.947 write: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1006msec); 0 zone resets 00:42:15.947 slat (usec): min=3, max=16525, avg=197.64, stdev=883.96 00:42:15.947 clat (usec): min=2606, max=89686, avg=24173.36, stdev=14262.76 00:42:15.947 lat (usec): min=2627, max=89694, avg=24371.01, stdev=14355.47 00:42:15.947 clat percentiles (usec): 00:42:15.947 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[14353], 20.00th=[14877], 00:42:15.947 | 30.00th=[15795], 40.00th=[19006], 50.00th=[20055], 60.00th=[20317], 00:42:15.947 | 70.00th=[23987], 80.00th=[31327], 90.00th=[44827], 95.00th=[54264], 00:42:15.947 | 99.00th=[84411], 99.50th=[85459], 99.90th=[89654], 99.95th=[89654], 00:42:15.947 | 99.99th=[89654] 00:42:15.947 bw ( KiB/s): min= 9536, max=12288, per=18.17%, avg=10912.00, stdev=1945.96, samples=2 00:42:15.947 iops : min= 2384, max= 3072, avg=2728.00, stdev=486.49, samples=2 00:42:15.947 lat (msec) : 4=0.07%, 10=2.86%, 20=50.73%, 50=41.48%, 100=4.86% 00:42:15.947 cpu : usr=2.29%, sys=3.48%, ctx=353, majf=0, minf=1 00:42:15.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:42:15.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:15.947 issued rwts: total=2560,2855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:15.947 00:42:15.947 Run status group 0 (all jobs): 00:42:15.947 READ: bw=55.5MiB/s (58.2MB/s), 9.94MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=55.9MiB (58.6MB), run=1002-1008msec 00:42:15.947 WRITE: bw=58.7MiB/s (61.5MB/s), 11.1MiB/s-17.9MiB/s (11.6MB/s-18.8MB/s), io=59.1MiB (62.0MB), run=1002-1008msec 00:42:15.947 00:42:15.947 Disk stats (read/write): 00:42:15.947 nvme0n1: ios=3621/3650, merge=0/0, ticks=42905/57935, in_queue=100840, util=100.00% 00:42:15.947 nvme0n2: ios=2599/3072, merge=0/0, ticks=14737/11698, in_queue=26435, util=85.66% 00:42:15.947 nvme0n3: ios=3122/3303, merge=0/0, ticks=22847/28935, in_queue=51782, util=96.96% 00:42:15.947 nvme0n4: ios=2048/2560, merge=0/0, ticks=17049/20912, in_queue=37961, util=89.62% 00:42:15.947 21:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:15.947 21:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1917453 00:42:15.947 21:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:15.947 21:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:15.947 [global] 00:42:15.947 thread=1 00:42:15.947 invalidate=1 00:42:15.947 rw=read 00:42:15.947 time_based=1 00:42:15.947 runtime=10 00:42:15.947 ioengine=libaio 00:42:15.947 direct=1 00:42:15.947 bs=4096 00:42:15.947 iodepth=1 00:42:15.947 norandommap=1 00:42:15.947 numjobs=1 00:42:15.947 00:42:15.947 [job0] 00:42:15.947 filename=/dev/nvme0n1 00:42:15.947 [job1] 00:42:15.947 
filename=/dev/nvme0n2 00:42:15.947 [job2] 00:42:15.947 filename=/dev/nvme0n3 00:42:15.947 [job3] 00:42:15.947 filename=/dev/nvme0n4 00:42:15.947 Could not set queue depth (nvme0n1) 00:42:15.947 Could not set queue depth (nvme0n2) 00:42:15.947 Could not set queue depth (nvme0n3) 00:42:15.947 Could not set queue depth (nvme0n4) 00:42:15.947 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.947 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.947 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.947 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.947 fio-3.35 00:42:15.947 Starting 4 threads 00:42:19.225 21:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:19.225 21:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:19.225 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4943872, buflen=4096 00:42:19.225 fio: pid=1917550, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:19.482 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:19.482 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:19.482 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9502720, buflen=4096 00:42:19.482 fio: pid=1917549, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:19.751 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56561664, buflen=4096 00:42:19.751 fio: pid=1917547, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:19.751 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:19.751 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:20.011 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=573440, buflen=4096 00:42:20.011 fio: pid=1917548, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:42:20.011 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:20.011 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:20.011 00:42:20.011 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917547: Tue Oct 8 21:09:48 2024 00:42:20.011 read: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(53.9MiB/3533msec) 00:42:20.011 slat (usec): min=4, max=17737, avg=12.66, stdev=205.50 00:42:20.011 clat (usec): min=189, max=619, avg=240.44, 
stdev=41.41 00:42:20.011 lat (usec): min=195, max=17979, avg=253.11, stdev=210.57 00:42:20.011 clat percentiles (usec): 00:42:20.011 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:42:20.011 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:42:20.011 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 289], 95.00th=[ 314], 00:42:20.011 | 99.00th=[ 400], 99.50th=[ 457], 99.90th=[ 515], 99.95th=[ 545], 00:42:20.011 | 99.99th=[ 611] 00:42:20.011 bw ( KiB/s): min=15016, max=17288, per=87.42%, avg=15853.33, stdev=1078.42, samples=6 00:42:20.011 iops : min= 3754, max= 4322, avg=3963.33, stdev=269.61, samples=6 00:42:20.011 lat (usec) : 250=72.55%, 500=27.28%, 750=0.16% 00:42:20.011 cpu : usr=1.30%, sys=4.33%, ctx=13815, majf=0, minf=1 00:42:20.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:20.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 issued rwts: total=13810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:20.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:20.011 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1917548: Tue Oct 8 21:09:48 2024 00:42:20.011 read: IOPS=36, BW=145KiB/s (149kB/s)(560KiB/3855msec) 00:42:20.011 slat (usec): min=6, max=8806, avg=167.19, stdev=1045.86 00:42:20.011 clat (usec): min=218, max=41987, avg=27333.70, stdev=19286.67 00:42:20.011 lat (usec): min=225, max=49982, avg=27453.95, stdev=19385.60 00:42:20.011 clat percentiles (usec): 00:42:20.011 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 249], 20.00th=[ 277], 00:42:20.011 | 30.00th=[ 363], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:20.011 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:20.011 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:20.011 | 99.99th=[42206] 00:42:20.011 bw ( KiB/s): min= 96, max= 304, per=0.74%, avg=134.29, stdev=75.54, samples=7 00:42:20.011 iops : min= 24, max= 76, avg=33.57, stdev=18.88, samples=7 00:42:20.011 lat (usec) : 250=10.64%, 500=19.86%, 750=2.84% 00:42:20.011 lat (msec) : 50=65.96% 00:42:20.011 cpu : usr=0.00%, sys=0.26%, ctx=144, majf=0, minf=2 00:42:20.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:20.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:20.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:20.011 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917549: Tue Oct 8 21:09:48 2024 00:42:20.011 read: IOPS=714, BW=2855KiB/s (2924kB/s)(9280KiB/3250msec) 00:42:20.011 slat (nsec): min=6967, max=45584, avg=9200.52, stdev=3228.89 00:42:20.011 clat (usec): min=199, max=43011, avg=1377.84, stdev=6621.94 00:42:20.011 lat (usec): min=206, max=43034, avg=1387.04, stdev=6623.58 00:42:20.011 clat percentiles (usec): 00:42:20.011 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:42:20.011 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:42:20.011 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 338], 00:42:20.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:20.011 | 99.99th=[43254] 
00:42:20.011 bw ( KiB/s): min= 96, max= 6024, per=10.92%, avg=1981.33, stdev=2876.47, samples=6 00:42:20.011 iops : min= 24, max= 1506, avg=495.33, stdev=719.12, samples=6 00:42:20.011 lat (usec) : 250=22.15%, 500=74.88%, 750=0.17% 00:42:20.011 lat (msec) : 4=0.04%, 50=2.71% 00:42:20.011 cpu : usr=0.52%, sys=0.49%, ctx=2324, majf=0, minf=2 00:42:20.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:20.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.011 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:20.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:20.011 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917550: Tue Oct 8 21:09:48 2024 00:42:20.011 read: IOPS=408, BW=1634KiB/s (1674kB/s)(4828KiB/2954msec) 00:42:20.011 slat (nsec): min=6993, max=34297, avg=9824.61, stdev=3389.09 00:42:20.011 clat (usec): min=224, max=41106, avg=2412.84, stdev=9045.58 00:42:20.011 lat (usec): min=232, max=41118, avg=2422.67, stdev=9047.59 00:42:20.011 clat percentiles (usec): 00:42:20.011 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 255], 00:42:20.011 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:42:20.011 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[40633], 00:42:20.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:20.011 | 99.99th=[41157] 00:42:20.011 bw ( KiB/s): min= 96, max= 5864, per=7.08%, avg=1284.80, stdev=2560.62, samples=5 00:42:20.011 iops : min= 24, max= 1466, avg=321.20, stdev=640.15, samples=5 00:42:20.011 lat (usec) : 250=14.65%, 500=78.73%, 750=1.24% 00:42:20.011 lat (msec) : 4=0.08%, 50=5.22% 00:42:20.011 cpu : usr=0.24%, sys=0.61%, ctx=1211, majf=0, minf=1 00:42:20.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:20.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.012 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:20.012 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:20.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:20.012 00:42:20.012 Run status group 0 (all jobs): 00:42:20.012 READ: bw=17.7MiB/s (18.6MB/s), 145KiB/s-15.3MiB/s (149kB/s-16.0MB/s), io=68.3MiB (71.6MB), run=2954-3855msec 00:42:20.012 00:42:20.012 Disk stats (read/write): 00:42:20.012 nvme0n1: ios=13037/0, merge=0/0, ticks=3084/0, in_queue=3084, util=94.11% 00:42:20.012 nvme0n2: ios=182/0, merge=0/0, ticks=4751/0, in_queue=4751, util=98.82% 00:42:20.012 nvme0n3: ios=1747/0, merge=0/0, ticks=3170/0, in_queue=3170, util=100.00% 00:42:20.012 nvme0n4: ios=879/0, merge=0/0, ticks=3249/0, in_queue=3249, util=100.00% 00:42:20.269 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:20.269 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:21.202 21:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:21.202 21:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:21.460 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:21.460 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:21.717 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:21.717 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1917453 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:21.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:21.975 nvmf hotplug test: fio failed as expected 00:42:21.975 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:22.233 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:22.491 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:22.491 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:22.491 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:22.491 rmmod nvme_tcp 00:42:22.491 rmmod nvme_fabrics 00:42:22.491 rmmod nvme_keyring 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1914919 ']' 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1914919 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1914919 ']' 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1914919 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1914919 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1914919' 00:42:22.491 killing process with pid 1914919 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1914919 00:42:22.491 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1914919 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 
00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:23.061 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:42:23.062 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:23.062 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:23.062 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.062 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:23.062 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:24.972 00:42:24.972 real 0m31.657s 00:42:24.972 user 1m22.058s 00:42:24.972 sys 0m12.613s 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:24.972 ************************************ 00:42:24.972 END TEST nvmf_fio_target 00:42:24.972 ************************************ 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:24.972 ************************************ 00:42:24.972 START TEST nvmf_bdevio 00:42:24.972 ************************************ 00:42:24.972 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:25.233 * Looking for test storage... 
00:42:25.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:25.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.233 --rc genhtml_branch_coverage=1 00:42:25.233 --rc genhtml_function_coverage=1 00:42:25.233 --rc genhtml_legend=1 00:42:25.233 --rc geninfo_all_blocks=1 00:42:25.233 --rc geninfo_unexecuted_blocks=1 00:42:25.233 00:42:25.233 ' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:25.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.233 --rc genhtml_branch_coverage=1 00:42:25.233 --rc genhtml_function_coverage=1 00:42:25.233 --rc genhtml_legend=1 00:42:25.233 --rc geninfo_all_blocks=1 00:42:25.233 --rc geninfo_unexecuted_blocks=1 00:42:25.233 00:42:25.233 ' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:25.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.233 --rc genhtml_branch_coverage=1 00:42:25.233 --rc genhtml_function_coverage=1 00:42:25.233 --rc genhtml_legend=1 00:42:25.233 --rc geninfo_all_blocks=1 00:42:25.233 --rc geninfo_unexecuted_blocks=1 00:42:25.233 00:42:25.233 ' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:25.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.233 --rc genhtml_branch_coverage=1 00:42:25.233 --rc genhtml_function_coverage=1 00:42:25.233 --rc genhtml_legend=1 00:42:25.233 --rc geninfo_all_blocks=1 00:42:25.233 --rc geninfo_unexecuted_blocks=1 00:42:25.233 00:42:25.233 ' 00:42:25.233 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.233 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:25.233 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:25.234 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:27.780 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:27.780 21:09:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:27.780 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:42:27.780 Found net devices under 0000:84:00.0: cvl_0_0 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:27.780 Found net devices under 0000:84:00.1: cvl_0_1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:27.780 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:27.781 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:28.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:28.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:42:28.040 00:42:28.040 --- 10.0.0.2 ping statistics --- 00:42:28.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.040 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:28.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:28.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:42:28.040 00:42:28.040 --- 10.0.0.1 ping statistics --- 00:42:28.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.040 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:28.040 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:28.041 21:09:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1920317 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1920317 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1920317 ']' 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:28.041 21:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.041 [2024-10-08 21:09:56.713997] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:28.041 [2024-10-08 21:09:56.716779] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:42:28.041 [2024-10-08 21:09:56.716895] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:28.300 [2024-10-08 21:09:56.831987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:28.300 [2024-10-08 21:09:56.964225] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:28.300 [2024-10-08 21:09:56.964293] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:28.300 [2024-10-08 21:09:56.964310] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.300 [2024-10-08 21:09:56.964325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.300 [2024-10-08 21:09:56.964336] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:28.300 [2024-10-08 21:09:56.966265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:42:28.300 [2024-10-08 21:09:56.966377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:42:28.300 [2024-10-08 21:09:56.966449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:42:28.300 [2024-10-08 21:09:56.966453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:42:28.559 [2024-10-08 21:09:57.078458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
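Annotation: the nvmf_tcp_init trace above splits the two e810 ports into an initiator side (cvl_0_1, left in the root namespace) and a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace), addresses them as 10.0.0.1 and 10.0.0.2, opens TCP port 4420 with an iptables rule, verifies reachability with ping, and then launches nvmf_tgt inside the namespace with --interrupt-mode. A minimal stand-alone sketch of the same bring-up, assuming the interface names, addresses, and workspace-relative nvmf_tgt path shown in the trace:

# Sketch of the namespace split performed by nvmf_tcp_init above (names taken from the trace).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the initiator port
ping -c 1 10.0.0.2                                       # root namespace -> namespace reachability check
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &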
00:42:28.559 [2024-10-08 21:09:57.078684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:28.559 [2024-10-08 21:09:57.079006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:28.559 [2024-10-08 21:09:57.079585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:28.559 [2024-10-08 21:09:57.079859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 [2024-10-08 21:09:57.255230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 Malloc0 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.559 21:09:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.559 [2024-10-08 21:09:57.315404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:42:28.559 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:28.818 { 00:42:28.818 "params": { 00:42:28.818 "name": "Nvme$subsystem", 00:42:28.818 "trtype": "$TEST_TRANSPORT", 00:42:28.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:28.818 "adrfam": "ipv4", 00:42:28.818 "trsvcid": "$NVMF_PORT", 00:42:28.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:28.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:28.818 "hdgst": ${hdgst:-false}, 00:42:28.818 "ddgst": ${ddgst:-false} 00:42:28.818 }, 00:42:28.818 "method": "bdev_nvme_attach_controller" 00:42:28.818 } 00:42:28.818 EOF 00:42:28.818 )") 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:42:28.818 21:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:28.818 "params": { 00:42:28.818 "name": "Nvme1", 00:42:28.818 "trtype": "tcp", 00:42:28.818 "traddr": "10.0.0.2", 00:42:28.818 "adrfam": "ipv4", 00:42:28.818 "trsvcid": "4420", 00:42:28.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:28.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:28.818 "hdgst": false, 00:42:28.818 "ddgst": false 00:42:28.818 }, 00:42:28.818 "method": "bdev_nvme_attach_controller" 00:42:28.818 }' 00:42:28.818 [2024-10-08 21:09:57.376187] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
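Annotation: target/bdevio.sh provisions the target entirely through rpc_cmd, a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and then hands bdevio a generated bdev_nvme_attach_controller config over /dev/fd/62 instead of a file on disk. The same provisioning written out as plain rpc.py calls, reusing the exact arguments visible in the trace:

# Sketch: the rpc_cmd calls above, issued directly with scripts/rpc.py.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport with the options used above
$RPC bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420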
00:42:28.818 [2024-10-08 21:09:57.376288] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920470 ] 00:42:28.819 [2024-10-08 21:09:57.449544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:28.819 [2024-10-08 21:09:57.568221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.819 [2024-10-08 21:09:57.568273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.819 [2024-10-08 21:09:57.568277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.077 I/O targets: 00:42:29.077 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:29.077 00:42:29.077 00:42:29.077 CUnit - A unit testing framework for C - Version 2.1-3 00:42:29.077 http://cunit.sourceforge.net/ 00:42:29.077 00:42:29.077 00:42:29.077 Suite: bdevio tests on: Nvme1n1 00:42:29.077 Test: blockdev write read block ...passed 00:42:29.077 Test: blockdev write zeroes read block ...passed 00:42:29.077 Test: blockdev write zeroes read no split ...passed 00:42:29.336 Test: blockdev write zeroes read split ...passed 00:42:29.336 Test: blockdev write zeroes read split partial ...passed 00:42:29.336 Test: blockdev reset ...[2024-10-08 21:09:57.860462] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:29.336 [2024-10-08 21:09:57.860574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4df40 (9): Bad file descriptor 00:42:29.336 [2024-10-08 21:09:57.912128] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:29.336 passed 00:42:29.336 Test: blockdev write read 8 blocks ...passed 00:42:29.336 Test: blockdev write read size > 128k ...passed 00:42:29.336 Test: blockdev write read invalid size ...passed 00:42:29.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:29.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:29.336 Test: blockdev write read max offset ...passed 00:42:29.336 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:29.337 Test: blockdev writev readv 8 blocks ...passed 00:42:29.337 Test: blockdev writev readv 30 x 1block ...passed 00:42:29.337 Test: blockdev writev readv block ...passed 00:42:29.337 Test: blockdev writev readv size > 128k ...passed 00:42:29.337 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:29.337 Test: blockdev comparev and writev ...[2024-10-08 21:09:58.084224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.084260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.084285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.084302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.084687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.084713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.084736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.085156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.085181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.085203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.085220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.085591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.085616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:29.337 [2024-10-08 21:09:58.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:29.337 [2024-10-08 21:09:58.085662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:29.595 passed 00:42:29.595 Test: blockdev nvme passthru rw ...passed 00:42:29.595 Test: blockdev nvme passthru vendor specific ...[2024-10-08 21:09:58.168021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:29.595 [2024-10-08 21:09:58.168052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:29.595 [2024-10-08 21:09:58.168202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:29.595 [2024-10-08 21:09:58.168225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:29.595 [2024-10-08 21:09:58.168371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:29.595 [2024-10-08 21:09:58.168396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:29.595 [2024-10-08 21:09:58.168541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:29.595 [2024-10-08 21:09:58.168565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:29.595 passed 00:42:29.595 Test: blockdev nvme admin passthru ...passed 00:42:29.595 Test: blockdev copy ...passed 00:42:29.595 00:42:29.595 Run Summary: Type Total Ran Passed Failed Inactive 00:42:29.595 suites 1 1 n/a 0 0 00:42:29.595 tests 23 23 23 0 0 00:42:29.595 asserts 152 152 152 0 n/a 00:42:29.595 00:42:29.595 Elapsed time = 0.945 seconds 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:29.854 rmmod nvme_tcp 00:42:29.854 rmmod nvme_fabrics 00:42:29.854 rmmod nvme_keyring 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
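Annotation: the teardown that starts here mirrors the setup in reverse. The test deletes the subsystem, syncs, unloads the initiator-side nvme-tcp/nvme-fabrics modules (the rmmod lines above are modprobe's verbose output), kills the nvmf_tgt it started, strips only the iptables rules tagged SPDK_NVMF, and finally removes the namespace and flushes the initiator address. A condensed sketch of that order, reusing the pid and names from this run:

# Sketch of the nvmftestfini sequence traced above and below (pid/namespace names come from this run).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp                         # also drops nvme_fabrics/nvme_keyring once unused
modprobe -v -r nvme-fabrics
kill 1920317                                    # nvmfpid recorded when the target was launched
iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only the rules this test inserted
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1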
00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1920317 ']' 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1920317 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1920317 ']' 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1920317 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1920317 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1920317' 00:42:29.854 killing process with pid 1920317 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1920317 00:42:29.854 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1920317 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:30.422 21:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:32.374 00:42:32.374 real 0m7.278s 00:42:32.374 user 
0m8.675s 00:42:32.374 sys 0m3.109s 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:32.374 ************************************ 00:42:32.374 END TEST nvmf_bdevio 00:42:32.374 ************************************ 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:32.374 00:42:32.374 real 4m49.493s 00:42:32.374 user 10m5.893s 00:42:32.374 sys 1m46.721s 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:32.374 21:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:32.374 ************************************ 00:42:32.374 END TEST nvmf_target_core_interrupt_mode 00:42:32.374 ************************************ 00:42:32.374 21:10:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:32.374 21:10:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:32.374 21:10:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:32.374 21:10:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:32.374 ************************************ 00:42:32.374 START TEST nvmf_interrupt 00:42:32.374 ************************************ 00:42:32.374 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:32.374 * Looking for test storage... 
00:42:32.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:32.374 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:32.679 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:32.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:32.680 --rc genhtml_branch_coverage=1 00:42:32.680 --rc genhtml_function_coverage=1 00:42:32.680 --rc genhtml_legend=1 00:42:32.680 --rc geninfo_all_blocks=1 00:42:32.680 --rc geninfo_unexecuted_blocks=1 00:42:32.680 00:42:32.680 ' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:32.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:32.680 --rc genhtml_branch_coverage=1 00:42:32.680 --rc genhtml_function_coverage=1 00:42:32.680 --rc genhtml_legend=1 00:42:32.680 --rc geninfo_all_blocks=1 00:42:32.680 --rc geninfo_unexecuted_blocks=1 00:42:32.680 00:42:32.680 ' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:32.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:32.680 --rc genhtml_branch_coverage=1 00:42:32.680 --rc genhtml_function_coverage=1 00:42:32.680 --rc genhtml_legend=1 00:42:32.680 --rc geninfo_all_blocks=1 00:42:32.680 --rc geninfo_unexecuted_blocks=1 00:42:32.680 00:42:32.680 ' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:32.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:32.680 --rc genhtml_branch_coverage=1 00:42:32.680 --rc genhtml_function_coverage=1 00:42:32.680 --rc genhtml_legend=1 00:42:32.680 --rc geninfo_all_blocks=1 00:42:32.680 --rc geninfo_unexecuted_blocks=1 00:42:32.680 00:42:32.680 ' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.680 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:32.681 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:32.681 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:32.681 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:32.681 21:10:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:32.681 21:10:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:36.017 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:36.017 21:10:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:36.017 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:42:36.017 Found net devices under 0000:84:00.0: cvl_0_0 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:36.017 Found net devices under 0000:84:00.1: cvl_0_1 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:36.017 21:10:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:36.017 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:36.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:36.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:42:36.018 00:42:36.018 --- 10.0.0.2 ping statistics --- 00:42:36.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:36.018 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:36.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:36.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:42:36.018 00:42:36.018 --- 10.0.0.1 ping statistics --- 00:42:36.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:36.018 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1922700 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1922700 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1922700 ']' 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:36.018 21:10:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.018 [2024-10-08 21:10:04.537205] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:36.018 [2024-10-08 21:10:04.538824] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:42:36.018 [2024-10-08 21:10:04.538902] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:36.018 [2024-10-08 21:10:04.706470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:36.279 [2024-10-08 21:10:04.944327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
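Annotation: this interrupt-mode target is started with a much smaller core mask than the bdevio target earlier, -m 0x3 instead of -m 0x78, which is why only two reactors come up just below (cores 0 and 1) versus four earlier (cores 3 through 6). A throwaway sketch that decodes the masks the same way, illustration only:

# Sketch: decode the -m reactor masks seen in this log.
for mask in 0x78 0x3; do
    cores=()
    for ((bit = 0; bit < 64; bit++)); do
        (( (mask >> bit) & 1 )) && cores+=("$bit")
    done
    printf '%s -> cores %s\n' "$mask" "${cores[*]}"      # 0x78 -> 3 4 5 6, 0x3 -> 0 1
done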
00:42:36.279 [2024-10-08 21:10:04.944450] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:36.279 [2024-10-08 21:10:04.944488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:36.279 [2024-10-08 21:10:04.944528] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:36.279 [2024-10-08 21:10:04.944556] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:36.279 [2024-10-08 21:10:04.948704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:36.279 [2024-10-08 21:10:04.948723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:36.539 [2024-10-08 21:10:05.120225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:36.539 [2024-10-08 21:10:05.120289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:36.539 [2024-10-08 21:10:05.120931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:36.539 5000+0 records in 00:42:36.539 5000+0 records out 00:42:36.539 10240000 bytes (10 MB, 9.8 MiB) copied, 0.023073 s, 444 MB/s 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.539 AIO0 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.539 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.539 [2024-10-08 21:10:05.281951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.799 21:10:05 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.799 [2024-10-08 21:10:05.334570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1922700 0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 0 idle 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922700 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.65 reactor_0' 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922700 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.65 reactor_0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1922700 1 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 1 idle 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:36.799 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922704 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922704 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1922864 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
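For context on the reactor_is_busy_or_idle checks traced above and below: the helper takes one snapshot of per-thread CPU usage with top, isolates the reactor thread for the requested index, and compares the CPU column against the thresholds shown in the trace (busy 65 / idle 30 for the idle checks, and 30 once BUSY_THRESHOLD=30 is set while spdk_nvme_perf drives load). A minimal standalone sketch of that probe, assuming the default top field layout seen in this log (the helper name is illustrative, not part of the harness):

    # prints the whole-number CPU% of reactor_<idx> inside the SPDK app with PID <pid>
    reactor_cpu() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 \
            | grep "reactor_${idx}" \
            | sed -e 's/^\s*//g' \
            | awk '{print $9}' \
            | cut -d. -f1          # drop the fractional part, as the trace does
    }
    # busy while the perf workload runs if usage exceeds the threshold, idle otherwise
    (( $(reactor_cpu 1922700 0) > 30 )) && echo busy || echo idle
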
00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1922700 0 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1922700 0 busy 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:37.060 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922700 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.65 reactor_0' 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922700 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.65 reactor_0 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:37.319 21:10:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:38.253 21:10:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:38.253 21:10:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.253 21:10:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:38.253 21:10:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922700 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.92 reactor_0' 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922700 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.92 reactor_0 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:38.511 21:10:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1922700 1 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1922700 1 busy 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922704 root 20 0 128.2g 48384 34944 R 93.8 0.1 0:01.31 reactor_1' 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922704 root 20 0 128.2g 48384 34944 R 93.8 0.1 0:01.31 reactor_1 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:38.512 21:10:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1922864 00:42:48.478 Initializing NVMe Controllers 00:42:48.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:48.478 Controller IO queue size 256, less than required. 00:42:48.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:48.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:48.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:48.478 Initialization complete. Launching workers. 
00:42:48.478 ======================================================== 00:42:48.478 Latency(us) 00:42:48.478 Device Information : IOPS MiB/s Average min max 00:42:48.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14356.56 56.08 17842.63 5051.31 23055.13 00:42:48.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14154.76 55.29 18097.27 5126.27 24158.20 00:42:48.478 ======================================================== 00:42:48.478 Total : 28511.33 111.37 17969.05 5051.31 24158.20 00:42:48.478 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1922700 0 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 0 idle 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:48.478 21:10:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922700 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.61 reactor_0' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922700 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.61 reactor_0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1922700 1 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 1 idle 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922704 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.99 reactor_1' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922704 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.99 reactor_1 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:48.478 21:10:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:42:50.387 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:50.387 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:50.387 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:50.387 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1922700 0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 0 idle 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922700 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.80 reactor_0' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922700 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.80 reactor_0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1922700 1 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1922700 1 idle 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1922700 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1922700 -w 256 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1922704 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.06 reactor_1' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1922704 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.06 reactor_1 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:50.388 21:10:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:50.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:50.388 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:50.388 rmmod nvme_tcp 00:42:50.646 rmmod nvme_fabrics 00:42:50.646 rmmod nvme_keyring 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1922700 ']' 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1922700 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1922700 ']' 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1922700 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1922700 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1922700' 00:42:50.646 killing process with pid 1922700 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1922700 00:42:50.646 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1922700 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:51.217 21:10:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.123 21:10:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:53.123 00:42:53.123 real 0m20.763s 00:42:53.123 user 0m38.234s 00:42:53.123 sys 0m8.139s 00:42:53.123 21:10:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:53.123 21:10:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:53.123 ************************************ 00:42:53.123 END TEST nvmf_interrupt 00:42:53.123 ************************************ 00:42:53.123 00:42:53.123 real 32m29.765s 00:42:53.123 user 74m3.212s 00:42:53.123 sys 8m30.575s 00:42:53.123 21:10:21 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:53.123 21:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.123 ************************************ 00:42:53.123 END TEST nvmf_tcp 00:42:53.123 ************************************ 00:42:53.123 21:10:21 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:42:53.123 21:10:21 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:53.123 21:10:21 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:53.123 21:10:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:53.123 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:42:53.123 ************************************ 00:42:53.123 START TEST spdkcli_nvmf_tcp 00:42:53.123 ************************************ 00:42:53.123 21:10:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:53.384 * Looking for test storage... 00:42:53.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:53.384 21:10:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:53.384 21:10:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:42:53.384 21:10:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:53.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.646 --rc genhtml_branch_coverage=1 00:42:53.646 --rc genhtml_function_coverage=1 00:42:53.646 --rc genhtml_legend=1 00:42:53.646 --rc geninfo_all_blocks=1 00:42:53.646 --rc geninfo_unexecuted_blocks=1 00:42:53.646 00:42:53.646 ' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:53.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.646 --rc genhtml_branch_coverage=1 00:42:53.646 --rc genhtml_function_coverage=1 00:42:53.646 --rc genhtml_legend=1 00:42:53.646 --rc geninfo_all_blocks=1 00:42:53.646 --rc geninfo_unexecuted_blocks=1 00:42:53.646 00:42:53.646 ' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:53.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.646 --rc genhtml_branch_coverage=1 00:42:53.646 --rc genhtml_function_coverage=1 00:42:53.646 --rc genhtml_legend=1 00:42:53.646 --rc geninfo_all_blocks=1 00:42:53.646 --rc geninfo_unexecuted_blocks=1 00:42:53.646 00:42:53.646 ' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:53.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.646 --rc genhtml_branch_coverage=1 00:42:53.646 --rc genhtml_function_coverage=1 00:42:53.646 --rc genhtml_legend=1 00:42:53.646 --rc geninfo_all_blocks=1 00:42:53.646 --rc geninfo_unexecuted_blocks=1 00:42:53.646 00:42:53.646 ' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:53.646 
21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:53.646 21:10:22 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:53.646 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:53.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1924868 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1924868 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1924868 ']' 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:53.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:53.647 21:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.647 [2024-10-08 21:10:22.323120] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
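The spdkcli test that begins here launches a dedicated nvmf_tgt with the same reactor mask (0x3), waits for its RPC socket to come up, then builds the NVMe-oF configuration through spdkcli_job.py and checks the resulting tree with spdkcli.py ll /nvmf against a match file. A rough standalone equivalent of that bring-up, assuming an SPDK checkout as the working directory and the default RPC socket path used in this log (the polling loop is only an approximation of waitforlisten):

    # start the target with the same reactor mask the test uses
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    # poll until the RPC server answers before issuing configuration commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # inspect the NVMe-oF tree that the spdkcli_job.py commands below populate
    ./scripts/spdkcli.py ll /nvmf
    kill "$tgt_pid"
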
00:42:53.647 [2024-10-08 21:10:22.323297] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924868 ] 00:42:53.907 [2024-10-08 21:10:22.457596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:54.167 [2024-10-08 21:10:22.684378] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:54.167 [2024-10-08 21:10:22.684393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.736 21:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:54.736 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:54.736 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:54.736 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:54.736 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:54.736 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:54.736 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:54.736 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:54.736 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:54.736 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:54.736 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:54.736 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:54.736 ' 00:42:58.032 [2024-10-08 21:10:26.559842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:59.409 [2024-10-08 21:10:27.971293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:43:01.946 [2024-10-08 21:10:30.561152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:43:04.480 [2024-10-08 21:10:32.777235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:43:05.856 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:43:05.857 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:43:05.857 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:05.857 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:43:05.857 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:05.857 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:43:05.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:43:05.857 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:43:05.857 21:10:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:43:06.424 21:10:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:43:06.682 21:10:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:43:06.682 21:10:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:43:06.682 21:10:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:06.682 21:10:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:06.682 
21:10:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:43:06.682 21:10:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:06.683 21:10:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:06.683 21:10:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:43:06.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:43:06.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:06.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:43:06.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:43:06.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:43:06.683 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:43:06.683 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:06.683 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:06.683 ' 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:13.318 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:13.318 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:13.318 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:13.318 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:13.318 
21:10:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1924868 ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1924868' 00:43:13.318 killing process with pid 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1924868 ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1924868 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1924868 ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1924868 00:43:13.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1924868) - No such process 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1924868 is not found' 00:43:13.318 Process with pid 1924868 is not found 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:13.318 21:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:13.319 00:43:13.319 real 0m20.104s 00:43:13.319 user 0m44.142s 00:43:13.319 sys 0m1.406s 00:43:13.319 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:13.319 21:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:13.319 ************************************ 00:43:13.319 END TEST spdkcli_nvmf_tcp 00:43:13.319 ************************************ 00:43:13.319 21:10:42 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:13.319 21:10:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:13.319 21:10:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:13.319 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:43:13.319 ************************************ 00:43:13.319 START TEST nvmf_identify_passthru 00:43:13.319 ************************************ 00:43:13.319 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:13.578 * Looking for test 
storage... 00:43:13.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:13.578 21:10:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:13.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.578 --rc genhtml_branch_coverage=1 00:43:13.578 --rc genhtml_function_coverage=1 00:43:13.578 --rc genhtml_legend=1 00:43:13.578 --rc geninfo_all_blocks=1 00:43:13.578 --rc geninfo_unexecuted_blocks=1 00:43:13.578 00:43:13.578 ' 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:13.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.578 --rc genhtml_branch_coverage=1 00:43:13.578 --rc genhtml_function_coverage=1 00:43:13.578 --rc genhtml_legend=1 00:43:13.578 --rc geninfo_all_blocks=1 00:43:13.578 --rc geninfo_unexecuted_blocks=1 00:43:13.578 00:43:13.578 ' 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:13.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.578 --rc genhtml_branch_coverage=1 00:43:13.578 --rc genhtml_function_coverage=1 00:43:13.578 --rc genhtml_legend=1 00:43:13.578 --rc geninfo_all_blocks=1 00:43:13.578 --rc geninfo_unexecuted_blocks=1 00:43:13.578 00:43:13.578 ' 00:43:13.578 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:13.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.578 --rc genhtml_branch_coverage=1 00:43:13.578 --rc genhtml_function_coverage=1 00:43:13.578 --rc genhtml_legend=1 00:43:13.578 --rc geninfo_all_blocks=1 00:43:13.578 --rc geninfo_unexecuted_blocks=1 00:43:13.578 00:43:13.578 ' 00:43:13.578 21:10:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:13.578 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:13.578 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:13.579 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:13.839 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:13.839 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:13.839 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:13.839 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:13.839 21:10:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:13.839 21:10:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:13.839 21:10:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:13.839 21:10:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:13.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:13.840 21:10:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:13.840 21:10:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:13.840 21:10:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:13.840 21:10:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:13.840 21:10:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:13.840 21:10:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.840 21:10:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:13.840 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:13.840 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:13.840 21:10:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:13.840 21:10:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:16.377 21:10:44 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:16.377 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:16.378 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:16.378 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:16.378 Found net devices under 0000:84:00.0: cvl_0_0 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:16.378 Found net devices under 0000:84:00.1: cvl_0_1 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:16.378 21:10:44 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:16.378 21:10:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:16.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:16.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:43:16.378 00:43:16.378 --- 10.0.0.2 ping statistics --- 00:43:16.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:16.378 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:16.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:16.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:43:16.378 00:43:16.378 --- 10.0.0.1 ping statistics --- 00:43:16.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:16.378 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:16.378 21:10:45 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:43:16.639 21:10:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:82:00.0 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:16.639 21:10:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:20.829 21:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:43:21.088 21:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:43:21.088 21:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:21.088 21:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1929787 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:25.276 21:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1929787 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1929787 ']' 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:25.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:25.276 21:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.276 [2024-10-08 21:10:54.033764] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:43:25.277 [2024-10-08 21:10:54.033854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:25.534 [2024-10-08 21:10:54.111325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:25.534 [2024-10-08 21:10:54.256187] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:25.534 [2024-10-08 21:10:54.256263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
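Because nvmf_tgt is launched with --wait-for-rpc, the target comes up idle inside the cvl_0_0_ns_spdk namespace and only continues once configuration RPCs arrive. A rough sketch of the bring-up sequence this test drives, issued through scripts/rpc.py instead of the test's rpc_cmd wrapper (the rpc.py path is an assumption; the flags, NQN, serial number, bdf and listen address are the ones used by this run):

#!/usr/bin/env bash
# Hedged sketch of the passthru-identify bring-up performed by identify_passthru.sh.
RPC="./scripts/rpc.py"   # assumed path; talks to the default /var/tmp/spdk.sock

# Enable passthrough of Identify Controller data from the backing NVMe device,
# then let framework initialization proceed (the target was started with --wait-for-rpc).
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init

# TCP transport, the local PCIe controller as backend, a single-namespace subsystem,
# and a TCP listener on the namespace-side address.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then verifies passthrough by running spdk_nvme_identify against the TCP-exported subsystem and checking that the serial and model numbers match those read directly from the PCIe device, which is what the Serial Number/Model Number greps in the surrounding output are doing.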
00:43:25.534 [2024-10-08 21:10:54.256284] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:25.534 [2024-10-08 21:10:54.256300] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:25.534 [2024-10-08 21:10:54.256314] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:25.534 [2024-10-08 21:10:54.258618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:25.534 [2024-10-08 21:10:54.258739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:43:25.534 [2024-10-08 21:10:54.258684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:43:25.534 [2024-10-08 21:10:54.258743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:43:25.793 21:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.793 INFO: Log level set to 20 00:43:25.793 INFO: Requests: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "method": "nvmf_set_config", 00:43:25.793 "id": 1, 00:43:25.793 "params": { 00:43:25.793 "admin_cmd_passthru": { 00:43:25.793 "identify_ctrlr": true 00:43:25.793 } 00:43:25.793 } 00:43:25.793 } 00:43:25.793 00:43:25.793 INFO: response: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "id": 1, 00:43:25.793 "result": true 00:43:25.793 } 00:43:25.793 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:25.793 21:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.793 INFO: Setting log level to 20 00:43:25.793 INFO: Setting log level to 20 00:43:25.793 INFO: Log level set to 20 00:43:25.793 INFO: Log level set to 20 00:43:25.793 INFO: Requests: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "method": "framework_start_init", 00:43:25.793 "id": 1 00:43:25.793 } 00:43:25.793 00:43:25.793 INFO: Requests: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "method": "framework_start_init", 00:43:25.793 "id": 1 00:43:25.793 } 00:43:25.793 00:43:25.793 [2024-10-08 21:10:54.469255] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:25.793 INFO: response: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "id": 1, 00:43:25.793 "result": true 00:43:25.793 } 00:43:25.793 00:43:25.793 INFO: response: 00:43:25.793 { 00:43:25.793 "jsonrpc": "2.0", 00:43:25.793 "id": 1, 00:43:25.793 "result": true 00:43:25.793 } 00:43:25.793 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:25.793 21:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.793 21:10:54 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:43:25.793 INFO: Setting log level to 40 00:43:25.793 INFO: Setting log level to 40 00:43:25.793 INFO: Setting log level to 40 00:43:25.793 [2024-10-08 21:10:54.479479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:25.793 21:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.793 21:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.793 21:10:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.069 Nvme0n1 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.069 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.069 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.069 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.069 [2024-10-08 21:10:57.379666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.069 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:29.069 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.069 [ 00:43:29.069 { 00:43:29.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:29.069 "subtype": "Discovery", 00:43:29.069 "listen_addresses": [], 00:43:29.069 "allow_any_host": true, 00:43:29.069 "hosts": [] 00:43:29.069 }, 00:43:29.069 { 00:43:29.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:29.069 "subtype": "NVMe", 00:43:29.069 "listen_addresses": [ 00:43:29.069 { 00:43:29.069 "trtype": "TCP", 00:43:29.070 "adrfam": "IPv4", 00:43:29.070 "traddr": "10.0.0.2", 00:43:29.070 "trsvcid": "4420" 00:43:29.070 } 00:43:29.070 ], 00:43:29.070 "allow_any_host": true, 00:43:29.070 "hosts": [], 00:43:29.070 "serial_number": 
"SPDK00000000000001", 00:43:29.070 "model_number": "SPDK bdev Controller", 00:43:29.070 "max_namespaces": 1, 00:43:29.070 "min_cntlid": 1, 00:43:29.070 "max_cntlid": 65519, 00:43:29.070 "namespaces": [ 00:43:29.070 { 00:43:29.070 "nsid": 1, 00:43:29.070 "bdev_name": "Nvme0n1", 00:43:29.070 "name": "Nvme0n1", 00:43:29.070 "nguid": "8C72719864E948EE8C593F25C52CB020", 00:43:29.070 "uuid": "8c727198-64e9-48ee-8c59-3f25c52cb020" 00:43:29.070 } 00:43:29.070 ] 00:43:29.070 } 00:43:29.070 ] 00:43:29.070 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:29.070 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:29.070 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:29.070 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:29.070 21:10:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:29.070 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:29.070 rmmod nvme_tcp 00:43:29.070 rmmod nvme_fabrics 00:43:29.327 rmmod nvme_keyring 00:43:29.327 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:29.327 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:29.327 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:29.327 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1929787 ']' 00:43:29.327 21:10:57 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1929787 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1929787 ']' 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1929787 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929787 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929787' 00:43:29.327 killing process with pid 1929787 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1929787 00:43:29.327 21:10:57 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1929787 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:31.225 21:10:59 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.225 21:10:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:31.225 21:10:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:33.131 21:11:01 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:33.131 00:43:33.131 real 0m19.655s 00:43:33.131 user 0m27.261s 00:43:33.131 sys 0m4.082s 00:43:33.131 21:11:01 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:33.131 21:11:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:33.131 ************************************ 00:43:33.131 END TEST nvmf_identify_passthru 00:43:33.131 ************************************ 00:43:33.131 21:11:01 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:33.131 21:11:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:33.131 21:11:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:33.131 21:11:01 -- common/autotest_common.sh@10 -- # set +x 00:43:33.131 ************************************ 00:43:33.131 START TEST nvmf_dif 00:43:33.131 ************************************ 00:43:33.131 21:11:01 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:33.131 * Looking for test 
storage... 00:43:33.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:33.131 21:11:01 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:33.131 21:11:01 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:43:33.131 21:11:01 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.390 --rc genhtml_branch_coverage=1 00:43:33.390 --rc genhtml_function_coverage=1 00:43:33.390 --rc genhtml_legend=1 00:43:33.390 --rc geninfo_all_blocks=1 00:43:33.390 --rc geninfo_unexecuted_blocks=1 00:43:33.390 00:43:33.390 ' 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.390 --rc genhtml_branch_coverage=1 00:43:33.390 --rc genhtml_function_coverage=1 00:43:33.390 --rc genhtml_legend=1 00:43:33.390 --rc geninfo_all_blocks=1 00:43:33.390 --rc geninfo_unexecuted_blocks=1 00:43:33.390 00:43:33.390 ' 00:43:33.390 21:11:01 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.390 --rc genhtml_branch_coverage=1 00:43:33.390 --rc genhtml_function_coverage=1 00:43:33.390 --rc genhtml_legend=1 00:43:33.390 --rc geninfo_all_blocks=1 00:43:33.390 --rc geninfo_unexecuted_blocks=1 00:43:33.390 00:43:33.390 ' 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.390 --rc genhtml_branch_coverage=1 00:43:33.390 --rc genhtml_function_coverage=1 00:43:33.390 --rc genhtml_legend=1 00:43:33.390 --rc geninfo_all_blocks=1 00:43:33.390 --rc geninfo_unexecuted_blocks=1 00:43:33.390 00:43:33.390 ' 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:33.390 21:11:01 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:33.390 21:11:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.390 21:11:01 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.390 21:11:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.390 21:11:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:33.390 21:11:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:33.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:33.390 21:11:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:33.390 21:11:01 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:33.390 21:11:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:33.390 21:11:02 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:33.390 21:11:02 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:33.390 21:11:02 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:33.390 21:11:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:36.678 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:36.678 
21:11:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:36.678 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:36.678 Found net devices under 0000:84:00.0: cvl_0_0 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:36.678 Found net devices under 0000:84:00.1: cvl_0_1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:36.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:36.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:43:36.678 00:43:36.678 --- 10.0.0.2 ping statistics --- 00:43:36.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.678 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:36.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
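[editor's note] To keep target and initiator traffic on real wire paths, nvmf_tcp_init above moves one E810 port (cvl_0_0) into a private network namespace for the target and leaves its peer port (cvl_0_1) in the root namespace for the initiator; the forward ping above and the reverse ping whose output continues just below confirm the 10.0.0.0/24 link in both directions. A condensed sketch of the topology, using only commands and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP replies back in
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns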
00:43:36.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:43:36.678 00:43:36.678 --- 10.0.0.1 ping statistics --- 00:43:36.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.678 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:43:36.678 21:11:04 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:36.679 21:11:04 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:43:36.679 21:11:04 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:43:36.679 21:11:04 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:37.613 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:37.613 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:37.613 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:37.613 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:37.613 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:37.872 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:37.872 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:37.872 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:37.872 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:37.872 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:37.872 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:37.872 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:37.872 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:37.872 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:37.872 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:37.872 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:37.872 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:37.872 21:11:06 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:37.872 21:11:06 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:37.873 21:11:06 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:37.873 21:11:06 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:37.873 21:11:06 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:37.873 21:11:06 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:38.133 21:11:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:38.133 21:11:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:38.133 21:11:06 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:38.133 21:11:06 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1933199 00:43:38.133 21:11:06 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:38.133 21:11:06 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1933199 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1933199 ']' 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:38.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:38.133 21:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:38.133 [2024-10-08 21:11:06.702747] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:43:38.133 [2024-10-08 21:11:06.702848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:38.133 [2024-10-08 21:11:06.818914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:38.393 [2024-10-08 21:11:07.037398] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:38.393 [2024-10-08 21:11:07.037512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:38.393 [2024-10-08 21:11:07.037549] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:38.393 [2024-10-08 21:11:07.037580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:38.393 [2024-10-08 21:11:07.037605] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:38.393 [2024-10-08 21:11:07.039015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:43:39.333 21:11:07 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 21:11:07 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:39.333 21:11:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:39.333 21:11:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 [2024-10-08 21:11:07.867491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.333 21:11:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 ************************************ 00:43:39.333 START TEST fio_dif_1_default 00:43:39.333 ************************************ 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 bdev_null0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.333 [2024-10-08 21:11:07.953027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:39.333 { 00:43:39.333 "params": { 00:43:39.333 "name": "Nvme$subsystem", 00:43:39.333 "trtype": "$TEST_TRANSPORT", 00:43:39.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.333 "adrfam": "ipv4", 00:43:39.333 "trsvcid": "$NVMF_PORT", 00:43:39.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.333 "hdgst": ${hdgst:-false}, 00:43:39.333 "ddgst": ${ddgst:-false} 00:43:39.333 }, 00:43:39.333 "method": "bdev_nvme_attach_controller" 00:43:39.333 } 00:43:39.333 EOF 00:43:39.333 )") 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
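[editor's note] Up to this point the fio_dif_1_default case has created its backing stack over RPC: a null bdev with 16-byte metadata and DIF type 1, a subsystem, a namespace, and a TCP listener inside the target namespace. A hedged sketch of the equivalent sequence with the stock SPDK RPC client (rpc_cmd in this harness wraps scripts/rpc.py against /var/tmp/spdk.sock; the arguments below are copied from the log):

  # transport is created once for the whole run with DIF insert/strip enabled
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # per-test stack for subsystem 0
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420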
00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:39.333 "params": { 00:43:39.333 "name": "Nvme0", 00:43:39.333 "trtype": "tcp", 00:43:39.333 "traddr": "10.0.0.2", 00:43:39.333 "adrfam": "ipv4", 00:43:39.333 "trsvcid": "4420", 00:43:39.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:39.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:39.333 "hdgst": false, 00:43:39.333 "ddgst": false 00:43:39.333 }, 00:43:39.333 "method": "bdev_nvme_attach_controller" 00:43:39.333 }' 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:39.333 21:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:39.333 21:11:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:39.333 21:11:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:39.333 21:11:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:39.333 21:11:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.900 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:39.900 fio-3.35 00:43:39.900 Starting 1 thread 00:43:52.089 00:43:52.089 filename0: (groupid=0, jobs=1): err= 0: pid=1933554: Tue Oct 8 21:11:19 2024 00:43:52.089 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:43:52.089 slat (nsec): min=4599, max=32400, avg=9971.05, stdev=4110.09 00:43:52.089 clat (usec): min=40774, max=43346, avg=40979.51, stdev=158.68 00:43:52.089 lat (usec): min=40781, max=43360, avg=40989.48, stdev=158.46 00:43:52.089 clat percentiles (usec): 00:43:52.089 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:52.089 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:52.089 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:52.089 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:43:52.089 | 99.99th=[43254] 00:43:52.089 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:43:52.089 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:52.089 lat (msec) : 50=100.00% 00:43:52.089 cpu : usr=91.28%, sys=8.45%, ctx=12, majf=0, minf=9 00:43:52.089 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.089 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.089 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:52.089 00:43:52.089 Run status group 0 (all jobs): 
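[editor's note] The numbers in this summary hang together: with a fixed queue depth, bandwidth is pinned by the completion latency. A quick check using the values reported above (iodepth=4, average clat ~41.0 ms, 4 KiB reads):

  4 in-flight / 0.0410 s  ~ 97.6 IOPS
  97.6 IOPS x 4 KiB       ~ 390 KiB/s

which matches the avg bw (388.80 KiB/s) and iops (97.20) columns above and the READ line of the run-status group that follows.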
00:43:52.089 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10007-10007msec 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 00:43:52.089 real 0m11.706s 00:43:52.089 user 0m10.545s 00:43:52.089 sys 0m1.330s 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 ************************************ 00:43:52.089 END TEST fio_dif_1_default 00:43:52.089 ************************************ 00:43:52.089 21:11:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:52.089 21:11:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:52.089 21:11:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 ************************************ 00:43:52.089 START TEST fio_dif_1_multi_subsystems 00:43:52.089 ************************************ 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 bdev_null0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 [2024-10-08 21:11:19.721759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 bdev_null1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:52.089 { 00:43:52.089 "params": { 00:43:52.089 "name": "Nvme$subsystem", 00:43:52.089 "trtype": "$TEST_TRANSPORT", 00:43:52.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.089 "adrfam": "ipv4", 00:43:52.089 "trsvcid": "$NVMF_PORT", 00:43:52.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.089 "hdgst": ${hdgst:-false}, 00:43:52.089 "ddgst": ${ddgst:-false} 00:43:52.089 }, 00:43:52.089 "method": "bdev_nvme_attach_controller" 00:43:52.089 } 00:43:52.089 EOF 00:43:52.089 )") 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:52.089 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@73 -- # cat 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:52.090 { 00:43:52.090 "params": { 00:43:52.090 "name": "Nvme$subsystem", 00:43:52.090 "trtype": "$TEST_TRANSPORT", 00:43:52.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.090 "adrfam": "ipv4", 00:43:52.090 "trsvcid": "$NVMF_PORT", 00:43:52.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.090 "hdgst": ${hdgst:-false}, 00:43:52.090 "ddgst": ${ddgst:-false} 00:43:52.090 }, 00:43:52.090 "method": "bdev_nvme_attach_controller" 00:43:52.090 } 00:43:52.090 EOF 00:43:52.090 )") 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:52.090 "params": { 00:43:52.090 "name": "Nvme0", 00:43:52.090 "trtype": "tcp", 00:43:52.090 "traddr": "10.0.0.2", 00:43:52.090 "adrfam": "ipv4", 00:43:52.090 "trsvcid": "4420", 00:43:52.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:52.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:52.090 "hdgst": false, 00:43:52.090 "ddgst": false 00:43:52.090 }, 00:43:52.090 "method": "bdev_nvme_attach_controller" 00:43:52.090 },{ 00:43:52.090 "params": { 00:43:52.090 "name": "Nvme1", 00:43:52.090 "trtype": "tcp", 00:43:52.090 "traddr": "10.0.0.2", 00:43:52.090 "adrfam": "ipv4", 00:43:52.090 "trsvcid": "4420", 00:43:52.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:52.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:52.090 "hdgst": false, 00:43:52.090 "ddgst": false 00:43:52.090 }, 00:43:52.090 "method": "bdev_nvme_attach_controller" 00:43:52.090 }' 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:52.090 21:11:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.090 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:52.090 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:52.090 fio-3.35 00:43:52.090 Starting 2 threads 00:44:04.313 00:44:04.313 filename0: (groupid=0, jobs=1): err= 0: pid=1934951: Tue Oct 8 21:11:31 2024 00:44:04.313 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10024msec) 00:44:04.313 slat (nsec): min=7994, max=35353, avg=10934.26, stdev=3338.08 00:44:04.313 clat (usec): min=40920, max=44224, avg=42080.33, stdev=421.16 00:44:04.313 lat (usec): min=40929, max=44260, avg=42091.26, stdev=421.40 00:44:04.313 clat percentiles (usec): 00:44:04.313 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:44:04.313 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:44:04.313 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:44:04.313 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:44:04.313 | 99.99th=[44303] 00:44:04.313 bw ( KiB/s): min= 352, max= 384, per=36.31%, avg=379.20, stdev=11.72, samples=20 00:44:04.313 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:44:04.313 lat (msec) : 50=100.00% 00:44:04.313 cpu : usr=94.72%, sys=4.77%, ctx=15, majf=0, minf=9 00:44:04.313 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.313 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.313 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:04.313 filename1: (groupid=0, jobs=1): err= 0: pid=1934952: Tue Oct 8 21:11:31 2024 00:44:04.313 read: IOPS=166, BW=664KiB/s (680kB/s)(6656KiB/10019msec) 00:44:04.313 slat (nsec): min=8055, max=38594, avg=10855.72, stdev=3277.78 00:44:04.313 clat (usec): min=583, max=43887, avg=24048.43, stdev=20451.83 00:44:04.313 lat (usec): min=592, max=43905, avg=24059.29, stdev=20451.81 00:44:04.313 clat percentiles (usec): 00:44:04.313 | 1.00th=[ 701], 5.00th=[ 1037], 10.00th=[ 1057], 20.00th=[ 1106], 00:44:04.313 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[41681], 60.00th=[41681], 00:44:04.313 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:44:04.313 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:44:04.313 | 99.99th=[43779] 00:44:04.313 bw ( KiB/s): min= 512, max= 768, per=63.61%, avg=664.00, stdev=62.18, samples=20 00:44:04.313 iops : min= 128, max= 192, avg=166.00, stdev=15.55, samples=20 00:44:04.313 lat (usec) : 750=1.68%, 1000=0.96% 00:44:04.313 lat (msec) : 2=41.41%, 4=0.18%, 50=55.77% 00:44:04.313 cpu : usr=94.61%, sys=4.68%, ctx=54, majf=0, minf=9 00:44:04.313 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:44:04.313 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.313 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:04.313 00:44:04.313 Run status group 0 (all jobs): 00:44:04.313 READ: bw=1044KiB/s (1069kB/s), 380KiB/s-664KiB/s (389kB/s-680kB/s), io=10.2MiB (10.7MB), run=10019-10024msec 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 00:44:04.313 real 0m11.868s 00:44:04.313 user 0m20.644s 00:44:04.313 sys 0m1.421s 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 ************************************ 00:44:04.313 END TEST fio_dif_1_multi_subsystems 00:44:04.313 ************************************ 00:44:04.313 21:11:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:44:04.313 21:11:31 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:04.313 21:11:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 ************************************ 00:44:04.313 START TEST fio_dif_rand_params 00:44:04.313 ************************************ 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 bdev_null0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.313 [2024-10-08 21:11:31.672789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:04.313 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:04.314 { 00:44:04.314 "params": { 00:44:04.314 "name": "Nvme$subsystem", 00:44:04.314 "trtype": "$TEST_TRANSPORT", 00:44:04.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.314 "adrfam": "ipv4", 00:44:04.314 "trsvcid": "$NVMF_PORT", 00:44:04.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.314 "hdgst": ${hdgst:-false}, 00:44:04.314 "ddgst": ${ddgst:-false} 00:44:04.314 }, 00:44:04.314 "method": "bdev_nvme_attach_controller" 00:44:04.314 } 00:44:04.314 EOF 00:44:04.314 )") 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
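[editor's note] This first fio_dif_rand_params pass reuses the same single-subsystem plumbing but changes the protection scheme and the I/O profile: the null bdev above was created with --dif-type 3, and the parameters picked at the top of the test (bs=128k, numjobs=3, iodepth=3, runtime=5) explain the three threads and the ~5-second runs in the results further down. A hedged recap of just the pieces that differ from the earlier runs, with values copied from the log (the job file itself is generated on the fly and streamed over /dev/fd/61, so it is not reproduced here):

  # protection information type 3 instead of type 1 on the backing null bdev
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # fio profile for this pass (see the 'bs=(R) 128KiB' banner and 'Starting 3 threads' below)
  NULL_DIF=3  bs=128k  numjobs=3  iodepth=3  runtime=5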
00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:04.314 "params": { 00:44:04.314 "name": "Nvme0", 00:44:04.314 "trtype": "tcp", 00:44:04.314 "traddr": "10.0.0.2", 00:44:04.314 "adrfam": "ipv4", 00:44:04.314 "trsvcid": "4420", 00:44:04.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:04.314 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:04.314 "hdgst": false, 00:44:04.314 "ddgst": false 00:44:04.314 }, 00:44:04.314 "method": "bdev_nvme_attach_controller" 00:44:04.314 }' 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:04.314 21:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.314 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:04.314 ... 
00:44:04.314 fio-3.35 00:44:04.314 Starting 3 threads 00:44:09.584 00:44:09.584 filename0: (groupid=0, jobs=1): err= 0: pid=1936226: Tue Oct 8 21:11:37 2024 00:44:09.584 read: IOPS=148, BW=18.5MiB/s (19.4MB/s)(93.4MiB/5046msec) 00:44:09.584 slat (nsec): min=5676, max=51881, avg=23507.23, stdev=5144.04 00:44:09.584 clat (usec): min=7889, max=72483, avg=20176.92, stdev=12507.96 00:44:09.584 lat (usec): min=7910, max=72507, avg=20200.43, stdev=12507.77 00:44:09.584 clat percentiles (usec): 00:44:09.584 | 1.00th=[ 8291], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11994], 00:44:09.584 | 30.00th=[12649], 40.00th=[13304], 50.00th=[14222], 60.00th=[17171], 00:44:09.584 | 70.00th=[25297], 80.00th=[27132], 90.00th=[30016], 95.00th=[34341], 00:44:09.584 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:44:09.584 | 99.99th=[72877] 00:44:09.584 bw ( KiB/s): min=11008, max=33024, per=35.13%, avg=19072.00, stdev=9126.48, samples=10 00:44:09.584 iops : min= 86, max= 258, avg=149.00, stdev=71.30, samples=10 00:44:09.584 lat (msec) : 10=5.35%, 20=55.42%, 50=34.67%, 100=4.55% 00:44:09.584 cpu : usr=95.60%, sys=3.65%, ctx=14, majf=0, minf=45 00:44:09.584 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.584 filename0: (groupid=0, jobs=1): err= 0: pid=1936227: Tue Oct 8 21:11:37 2024 00:44:09.584 read: IOPS=139, BW=17.5MiB/s (18.3MB/s)(88.2MiB/5044msec) 00:44:09.584 slat (nsec): min=9369, max=58697, avg=25972.23, stdev=10183.59 00:44:09.584 clat (usec): min=9030, max=50985, avg=21342.19, stdev=8957.82 00:44:09.584 lat (usec): min=9046, max=51001, avg=21368.16, stdev=8965.90 00:44:09.584 clat percentiles (usec): 00:44:09.584 | 1.00th=[10028], 5.00th=[11338], 10.00th=[12518], 20.00th=[13829], 00:44:09.584 | 30.00th=[14746], 40.00th=[15270], 50.00th=[16057], 60.00th=[21103], 00:44:09.584 | 70.00th=[29492], 80.00th=[31589], 90.00th=[33817], 95.00th=[35390], 00:44:09.584 | 99.00th=[40633], 99.50th=[42730], 99.90th=[51119], 99.95th=[51119], 00:44:09.584 | 99.99th=[51119] 00:44:09.584 bw ( KiB/s): min=11264, max=27648, per=33.20%, avg=18022.40, stdev=7282.09, samples=10 00:44:09.584 iops : min= 88, max= 216, avg=140.80, stdev=56.89, samples=10 00:44:09.584 lat (msec) : 10=1.13%, 20=56.94%, 50=41.78%, 100=0.14% 00:44:09.584 cpu : usr=94.77%, sys=4.54%, ctx=8, majf=0, minf=27 00:44:09.584 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.584 filename0: (groupid=0, jobs=1): err= 0: pid=1936228: Tue Oct 8 21:11:37 2024 00:44:09.584 read: IOPS=137, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5004msec) 00:44:09.584 slat (nsec): min=5019, max=58486, avg=27203.41, stdev=10194.55 00:44:09.584 clat (usec): min=7612, max=94101, avg=21816.16, stdev=9208.56 00:44:09.584 lat (usec): min=7632, max=94129, avg=21843.36, stdev=9215.50 00:44:09.584 clat percentiles (usec): 00:44:09.584 | 1.00th=[11076], 5.00th=[11994], 10.00th=[12649], 
20.00th=[13960], 00:44:09.584 | 30.00th=[15139], 40.00th=[16057], 50.00th=[16909], 60.00th=[21365], 00:44:09.584 | 70.00th=[30016], 80.00th=[31589], 90.00th=[33162], 95.00th=[34341], 00:44:09.584 | 99.00th=[53740], 99.50th=[54789], 99.90th=[93848], 99.95th=[93848], 00:44:09.584 | 99.99th=[93848] 00:44:09.584 bw ( KiB/s): min=11776, max=26624, per=32.26%, avg=17510.40, stdev=6128.81, samples=10 00:44:09.584 iops : min= 92, max= 208, avg=136.80, stdev=47.88, samples=10 00:44:09.584 lat (msec) : 10=0.29%, 20=56.33%, 50=42.21%, 100=1.16% 00:44:09.584 cpu : usr=94.56%, sys=4.70%, ctx=9, majf=0, minf=1 00:44:09.584 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.584 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.584 00:44:09.584 Run status group 0 (all jobs): 00:44:09.584 READ: bw=53.0MiB/s (55.6MB/s), 17.2MiB/s-18.5MiB/s (18.0MB/s-19.4MB/s), io=268MiB (280MB), run=5004-5046msec 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 bdev_null0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 [2024-10-08 21:11:38.398815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 bdev_null1 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 bdev_null2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:44:09.844 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:09.844 21:11:38 
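The trace above tears down the target from the previous pass and rebuilds it with three null bdevs carrying DIF type 2 metadata (64 MB, 512-byte blocks, 16 bytes of metadata per block) before the next fio configuration is generated. A minimal sketch of the same sequence as direct RPC calls, shown for subsystem 0 only; the scripts/rpc.py path is an assumption standing in for the script's rpc_cmd wrapper, while the commands and flags themselves are taken verbatim from the trace:

    # sketch reconstructed from the xtrace above (subsystem 0; cnode1/cnode2 follow the same pattern)
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420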
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:09.845 { 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme$subsystem", 00:44:09.845 "trtype": "$TEST_TRANSPORT", 00:44:09.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "$NVMF_PORT", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:09.845 "hdgst": ${hdgst:-false}, 00:44:09.845 "ddgst": ${ddgst:-false} 00:44:09.845 }, 00:44:09.845 "method": "bdev_nvme_attach_controller" 00:44:09.845 } 00:44:09.845 EOF 00:44:09.845 )") 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:09.845 { 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme$subsystem", 00:44:09.845 "trtype": "$TEST_TRANSPORT", 00:44:09.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "$NVMF_PORT", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:09.845 "hdgst": ${hdgst:-false}, 00:44:09.845 "ddgst": ${ddgst:-false} 00:44:09.845 }, 00:44:09.845 "method": 
"bdev_nvme_attach_controller" 00:44:09.845 } 00:44:09.845 EOF 00:44:09.845 )") 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:09.845 { 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme$subsystem", 00:44:09.845 "trtype": "$TEST_TRANSPORT", 00:44:09.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "$NVMF_PORT", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:09.845 "hdgst": ${hdgst:-false}, 00:44:09.845 "ddgst": ${ddgst:-false} 00:44:09.845 }, 00:44:09.845 "method": "bdev_nvme_attach_controller" 00:44:09.845 } 00:44:09.845 EOF 00:44:09.845 )") 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme0", 00:44:09.845 "trtype": "tcp", 00:44:09.845 "traddr": "10.0.0.2", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "4420", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:09.845 "hdgst": false, 00:44:09.845 "ddgst": false 00:44:09.845 }, 00:44:09.845 "method": "bdev_nvme_attach_controller" 00:44:09.845 },{ 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme1", 00:44:09.845 "trtype": "tcp", 00:44:09.845 "traddr": "10.0.0.2", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "4420", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:09.845 "hdgst": false, 00:44:09.845 "ddgst": false 00:44:09.845 }, 00:44:09.845 "method": "bdev_nvme_attach_controller" 00:44:09.845 },{ 00:44:09.845 "params": { 00:44:09.845 "name": "Nvme2", 00:44:09.845 "trtype": "tcp", 00:44:09.845 "traddr": "10.0.0.2", 00:44:09.845 "adrfam": "ipv4", 00:44:09.845 "trsvcid": "4420", 00:44:09.845 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:09.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:09.845 "hdgst": false, 00:44:09.845 "ddgst": false 00:44:09.845 }, 00:44:09.845 "method": "bdev_nvme_attach_controller" 00:44:09.845 }' 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:09.845 21:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:10.104 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:10.104 ... 00:44:10.104 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:10.104 ... 00:44:10.104 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:10.104 ... 00:44:10.104 fio-3.35 00:44:10.104 Starting 24 threads 00:44:22.372 00:44:22.372 filename0: (groupid=0, jobs=1): err= 0: pid=1937210: Tue Oct 8 21:11:50 2024 00:44:22.372 read: IOPS=467, BW=1871KiB/s (1916kB/s)(18.3MiB/10020msec) 00:44:22.372 slat (nsec): min=9548, max=61566, avg=30794.14, stdev=8838.97 00:44:22.372 clat (usec): min=32677, max=45974, avg=33940.89, stdev=1226.15 00:44:22.372 lat (usec): min=32717, max=45993, avg=33971.68, stdev=1225.20 00:44:22.372 clat percentiles (usec): 00:44:22.372 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.372 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.372 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.372 | 99.00th=[38011], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:44:22.372 | 99.99th=[45876] 00:44:22.372 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1868.80, stdev=64.34, samples=20 00:44:22.372 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:44:22.372 lat (msec) : 50=100.00% 00:44:22.372 cpu : usr=98.35%, sys=1.26%, ctx=17, majf=0, minf=9 00:44:22.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.372 filename0: (groupid=0, jobs=1): err= 0: pid=1937211: Tue Oct 8 21:11:50 2024 00:44:22.372 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10029msec) 00:44:22.372 slat (nsec): min=5494, max=59699, avg=20580.54, stdev=8245.83 00:44:22.372 clat (usec): min=12829, max=47380, avg=33806.64, stdev=1787.50 00:44:22.372 lat (usec): min=12839, max=47416, avg=33827.22, stdev=1788.22 00:44:22.372 clat percentiles (usec): 00:44:22.372 | 1.00th=[25560], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.372 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.372 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.372 | 99.00th=[37487], 99.50th=[40109], 99.90th=[42730], 99.95th=[45351], 00:44:22.372 | 99.99th=[47449] 00:44:22.372 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1881.60, stdev=60.18, samples=20 00:44:22.372 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:44:22.372 lat (msec) : 20=0.38%, 50=99.62% 00:44:22.372 cpu : usr=98.12%, sys=1.21%, ctx=104, majf=0, minf=9 00:44:22.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.372 filename0: (groupid=0, jobs=1): err= 0: pid=1937212: Tue Oct 8 21:11:50 2024 00:44:22.372 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10015msec) 00:44:22.372 slat (nsec): min=4787, max=48188, avg=26108.44, stdev=3964.92 00:44:22.372 clat (usec): min=25530, max=47847, avg=33950.53, stdev=1368.31 00:44:22.372 lat (usec): min=25545, max=47876, avg=33976.64, stdev=1367.78 00:44:22.372 clat percentiles (usec): 00:44:22.372 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.372 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.372 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.372 | 99.00th=[39584], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:44:22.372 | 99.99th=[47973] 00:44:22.372 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1866.26, stdev=64.74, samples=19 00:44:22.372 iops : min= 448, max= 480, avg=466.53, stdev=16.23, samples=19 00:44:22.372 lat (msec) : 50=100.00% 00:44:22.372 cpu : usr=98.34%, sys=1.24%, ctx=16, majf=0, minf=9 00:44:22.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.372 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.372 filename0: (groupid=0, jobs=1): err= 0: pid=1937213: Tue Oct 8 21:11:50 2024 00:44:22.372 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.372 slat (usec): min=13, max=101, avg=36.12, stdev=18.12 00:44:22.372 clat (usec): min=16178, max=64179, avg=33782.95, stdev=2361.28 00:44:22.372 lat (usec): min=16195, max=64235, avg=33819.07, stdev=2362.79 00:44:22.372 clat percentiles (usec): 00:44:22.373 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:22.373 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[44303], 99.90th=[64226], 99.95th=[64226], 00:44:22.373 | 99.99th=[64226] 00:44:22.373 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1868.80, stdev=76.58, samples=20 00:44:22.373 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.373 lat (msec) : 20=0.38%, 50=99.27%, 100=0.34% 00:44:22.373 cpu : usr=96.95%, sys=1.90%, ctx=80, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename0: (groupid=0, jobs=1): err= 0: pid=1937214: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10015msec) 00:44:22.373 slat (nsec): min=7629, max=71268, avg=31938.87, stdev=12846.33 00:44:22.373 clat (usec): 
min=16185, max=65688, avg=33861.79, stdev=2424.46 00:44:22.373 lat (usec): min=16202, max=65704, avg=33893.73, stdev=2424.06 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.373 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[44303], 99.90th=[65799], 99.95th=[65799], 00:44:22.373 | 99.99th=[65799] 00:44:22.373 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1868.80, stdev=76.58, samples=20 00:44:22.373 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.373 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.373 cpu : usr=96.87%, sys=1.99%, ctx=241, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename0: (groupid=0, jobs=1): err= 0: pid=1937215: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=469, BW=1877KiB/s (1923kB/s)(18.4MiB/10022msec) 00:44:22.373 slat (nsec): min=7274, max=66123, avg=21372.12, stdev=11691.64 00:44:22.373 clat (usec): min=22879, max=44719, avg=33924.29, stdev=1402.88 00:44:22.373 lat (usec): min=22893, max=44736, avg=33945.66, stdev=1402.37 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[28443], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.373 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.373 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 00:44:22.373 | 99.99th=[44827] 00:44:22.373 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1875.20, stdev=62.64, samples=20 00:44:22.373 iops : min= 448, max= 480, avg=468.80, stdev=15.66, samples=20 00:44:22.373 lat (msec) : 50=100.00% 00:44:22.373 cpu : usr=97.27%, sys=1.89%, ctx=88, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename0: (groupid=0, jobs=1): err= 0: pid=1937216: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.373 slat (usec): min=7, max=105, avg=34.54, stdev=17.50 00:44:22.373 clat (usec): min=16200, max=64563, avg=33797.65, stdev=2358.37 00:44:22.373 lat (usec): min=16216, max=64587, avg=33832.19, stdev=2358.71 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:44:22.373 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[44303], 99.90th=[64750], 99.95th=[64750], 00:44:22.373 | 99.99th=[64750] 00:44:22.373 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1868.80, stdev=76.58, 
samples=20 00:44:22.373 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.373 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.373 cpu : usr=98.26%, sys=1.25%, ctx=41, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename0: (groupid=0, jobs=1): err= 0: pid=1937217: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=469, BW=1876KiB/s (1921kB/s)(18.4MiB/10028msec) 00:44:22.373 slat (usec): min=6, max=113, avg=23.44, stdev=11.25 00:44:22.373 clat (usec): min=22577, max=42649, avg=33911.13, stdev=1389.82 00:44:22.373 lat (usec): min=22673, max=42668, avg=33934.57, stdev=1387.69 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.373 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:44:22.373 | 99.99th=[42730] 00:44:22.373 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1875.35, stdev=62.43, samples=20 00:44:22.373 iops : min= 448, max= 480, avg=468.80, stdev=15.66, samples=20 00:44:22.373 lat (msec) : 50=100.00% 00:44:22.373 cpu : usr=98.26%, sys=1.34%, ctx=19, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename1: (groupid=0, jobs=1): err= 0: pid=1937218: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=467, BW=1871KiB/s (1916kB/s)(18.3MiB/10020msec) 00:44:22.373 slat (nsec): min=5370, max=57093, avg=25694.78, stdev=9501.54 00:44:22.373 clat (usec): min=32772, max=45859, avg=33993.62, stdev=1223.10 00:44:22.373 lat (usec): min=32793, max=45874, avg=34019.32, stdev=1221.73 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.373 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[38011], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:44:22.373 | 99.99th=[45876] 00:44:22.373 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1868.80, stdev=64.34, samples=20 00:44:22.373 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:44:22.373 lat (msec) : 50=100.00% 00:44:22.373 cpu : usr=98.36%, sys=1.24%, ctx=21, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 
filename1: (groupid=0, jobs=1): err= 0: pid=1937219: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=470, BW=1882KiB/s (1928kB/s)(18.4MiB/10030msec) 00:44:22.373 slat (nsec): min=4261, max=60365, avg=19762.86, stdev=9078.62 00:44:22.373 clat (usec): min=6509, max=42712, avg=33837.96, stdev=1936.22 00:44:22.373 lat (usec): min=6514, max=42734, avg=33857.72, stdev=1936.53 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[23987], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.373 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:44:22.373 | 99.99th=[42730] 00:44:22.373 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1881.60, stdev=60.18, samples=20 00:44:22.373 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:44:22.373 lat (msec) : 10=0.15%, 20=0.23%, 50=99.62% 00:44:22.373 cpu : usr=98.44%, sys=1.16%, ctx=15, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename1: (groupid=0, jobs=1): err= 0: pid=1937220: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10028msec) 00:44:22.373 slat (nsec): min=9715, max=54377, avg=24432.58, stdev=9527.59 00:44:22.373 clat (usec): min=9937, max=42584, avg=33780.07, stdev=1906.89 00:44:22.373 lat (usec): min=9948, max=42610, avg=33804.50, stdev=1907.48 00:44:22.373 clat percentiles (usec): 00:44:22.373 | 1.00th=[23987], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.373 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.373 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.373 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:44:22.373 | 99.99th=[42730] 00:44:22.373 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1881.60, stdev=60.18, samples=20 00:44:22.373 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:44:22.373 lat (msec) : 10=0.13%, 20=0.21%, 50=99.66% 00:44:22.373 cpu : usr=98.44%, sys=1.17%, ctx=13, majf=0, minf=9 00:44:22.373 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.373 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.373 filename1: (groupid=0, jobs=1): err= 0: pid=1937221: Tue Oct 8 21:11:50 2024 00:44:22.373 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10017msec) 00:44:22.373 slat (nsec): min=4622, max=52745, avg=30716.01, stdev=7146.08 00:44:22.373 clat (usec): min=32626, max=44785, avg=33910.64, stdev=1129.87 00:44:22.373 lat (usec): min=32646, max=44808, avg=33941.36, stdev=1129.43 00:44:22.373 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 
| 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.374 | 99.00th=[38011], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:44:22.374 | 99.99th=[44827] 00:44:22.374 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1872.84, stdev=63.44, samples=19 00:44:22.374 iops : min= 448, max= 480, avg=468.21, stdev=15.86, samples=19 00:44:22.374 lat (msec) : 50=100.00% 00:44:22.374 cpu : usr=96.80%, sys=1.87%, ctx=278, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename1: (groupid=0, jobs=1): err= 0: pid=1937222: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=467, BW=1871KiB/s (1916kB/s)(18.3MiB/10020msec) 00:44:22.374 slat (nsec): min=8546, max=60000, avg=23461.55, stdev=10667.40 00:44:22.374 clat (usec): min=24232, max=45801, avg=34014.47, stdev=1270.55 00:44:22.374 lat (usec): min=24251, max=45817, avg=34037.93, stdev=1269.66 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.374 | 99.00th=[38536], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:44:22.374 | 99.99th=[45876] 00:44:22.374 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1868.80, stdev=64.34, samples=20 00:44:22.374 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:44:22.374 lat (msec) : 50=100.00% 00:44:22.374 cpu : usr=98.18%, sys=1.42%, ctx=11, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename1: (groupid=0, jobs=1): err= 0: pid=1937223: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.374 slat (usec): min=4, max=116, avg=27.99, stdev= 8.31 00:44:22.374 clat (usec): min=16169, max=64746, avg=33930.96, stdev=2381.15 00:44:22.374 lat (usec): min=16205, max=64762, avg=33958.96, stdev=2380.20 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.374 | 99.00th=[39584], 99.50th=[44303], 99.90th=[64750], 99.95th=[64750], 00:44:22.374 | 99.99th=[64750] 00:44:22.374 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1868.95, stdev=76.15, samples=20 00:44:22.374 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.374 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.374 cpu : usr=97.09%, sys=1.73%, ctx=289, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename1: (groupid=0, jobs=1): err= 0: pid=1937224: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.374 slat (nsec): min=4665, max=49899, avg=30504.01, stdev=7372.39 00:44:22.374 clat (usec): min=26631, max=44801, avg=33901.77, stdev=1090.93 00:44:22.374 lat (usec): min=26642, max=44821, avg=33932.27, stdev=1090.75 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.374 | 99.00th=[38011], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:44:22.374 | 99.99th=[44827] 00:44:22.374 bw ( KiB/s): min= 1788, max= 1920, per=4.16%, avg=1872.63, stdev=63.73, samples=19 00:44:22.374 iops : min= 447, max= 480, avg=468.16, stdev=15.93, samples=19 00:44:22.374 lat (msec) : 50=100.00% 00:44:22.374 cpu : usr=96.77%, sys=1.95%, ctx=157, majf=0, minf=9 00:44:22.374 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename1: (groupid=0, jobs=1): err= 0: pid=1937225: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.374 slat (nsec): min=5103, max=63187, avg=15620.30, stdev=7628.77 00:44:22.374 clat (usec): min=13674, max=57628, avg=34047.60, stdev=1658.62 00:44:22.374 lat (usec): min=13692, max=57645, avg=34063.22, stdev=1657.93 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33424], 00:44:22.374 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.374 | 99.00th=[40109], 99.50th=[44827], 99.90th=[46400], 99.95th=[47449], 00:44:22.374 | 99.99th=[57410] 00:44:22.374 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1868.80, stdev=62.85, samples=20 00:44:22.374 iops : min= 448, max= 480, avg=467.20, stdev=15.71, samples=20 00:44:22.374 lat (msec) : 20=0.04%, 50=99.91%, 100=0.04% 00:44:22.374 cpu : usr=97.22%, sys=1.66%, ctx=219, majf=0, minf=9 00:44:22.374 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename2: (groupid=0, jobs=1): err= 0: pid=1937226: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10016msec) 00:44:22.374 slat (nsec): min=4943, max=59887, avg=27981.90, stdev=8317.26 00:44:22.374 clat (usec): min=16379, max=66392, avg=33931.98, stdev=2437.40 00:44:22.374 lat (usec): min=16401, 
max=66406, avg=33959.96, stdev=2436.59 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.374 | 99.00th=[39584], 99.50th=[44303], 99.90th=[66323], 99.95th=[66323], 00:44:22.374 | 99.99th=[66323] 00:44:22.374 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1868.80, stdev=76.58, samples=20 00:44:22.374 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.374 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.374 cpu : usr=98.20%, sys=1.37%, ctx=43, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename2: (groupid=0, jobs=1): err= 0: pid=1937227: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=467, BW=1871KiB/s (1916kB/s)(18.3MiB/10020msec) 00:44:22.374 slat (nsec): min=9346, max=60296, avg=31837.04, stdev=8041.94 00:44:22.374 clat (usec): min=32602, max=45997, avg=33918.65, stdev=1229.32 00:44:22.374 lat (usec): min=32633, max=46016, avg=33950.48, stdev=1228.83 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.374 | 99.00th=[38011], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:44:22.374 | 99.99th=[45876] 00:44:22.374 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1868.80, stdev=64.34, samples=20 00:44:22.374 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:44:22.374 lat (msec) : 50=100.00% 00:44:22.374 cpu : usr=98.43%, sys=1.18%, ctx=14, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename2: (groupid=0, jobs=1): err= 0: pid=1937228: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10006msec) 00:44:22.374 slat (nsec): min=7856, max=59816, avg=25157.20, stdev=10292.16 00:44:22.374 clat (usec): min=23879, max=42599, avg=33899.39, stdev=1234.29 00:44:22.374 lat (usec): min=23889, max=42638, avg=33924.55, stdev=1234.72 00:44:22.374 clat percentiles (usec): 00:44:22.374 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.374 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.374 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.374 | 99.00th=[40109], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:44:22.374 | 99.99th=[42730] 00:44:22.374 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1872.84, stdev=63.44, samples=19 00:44:22.374 iops : min= 448, max= 480, avg=468.21, stdev=15.86, 
samples=19 00:44:22.374 lat (msec) : 50=100.00% 00:44:22.374 cpu : usr=98.20%, sys=1.35%, ctx=14, majf=0, minf=9 00:44:22.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.374 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.374 filename2: (groupid=0, jobs=1): err= 0: pid=1937229: Tue Oct 8 21:11:50 2024 00:44:22.374 read: IOPS=469, BW=1876KiB/s (1921kB/s)(18.4MiB/10028msec) 00:44:22.374 slat (nsec): min=6986, max=65901, avg=25407.91, stdev=10833.34 00:44:22.374 clat (usec): min=19234, max=47147, avg=33855.29, stdev=1503.23 00:44:22.374 lat (usec): min=19246, max=47162, avg=33880.70, stdev=1503.67 00:44:22.375 clat percentiles (usec): 00:44:22.375 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.375 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.375 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.375 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42730], 99.95th=[46924], 00:44:22.375 | 99.99th=[46924] 00:44:22.375 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1875.35, stdev=62.43, samples=20 00:44:22.375 iops : min= 448, max= 480, avg=468.80, stdev=15.66, samples=20 00:44:22.375 lat (msec) : 20=0.09%, 50=99.91% 00:44:22.375 cpu : usr=98.08%, sys=1.48%, ctx=13, majf=0, minf=9 00:44:22.375 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.375 filename2: (groupid=0, jobs=1): err= 0: pid=1937230: Tue Oct 8 21:11:50 2024 00:44:22.375 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10010msec) 00:44:22.375 slat (nsec): min=6555, max=68534, avg=28927.92, stdev=8772.40 00:44:22.375 clat (usec): min=14810, max=91619, avg=34060.51, stdev=3804.67 00:44:22.375 lat (usec): min=14820, max=91688, avg=34089.44, stdev=3804.41 00:44:22.375 clat percentiles (usec): 00:44:22.375 | 1.00th=[26084], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:44:22.375 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:44:22.375 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:44:22.375 | 99.00th=[42730], 99.50th=[44827], 99.90th=[91751], 99.95th=[91751], 00:44:22.375 | 99.99th=[91751] 00:44:22.375 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1862.55, stdev=93.67, samples=20 00:44:22.375 iops : min= 384, max= 480, avg=465.60, stdev=23.55, samples=20 00:44:22.375 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.375 cpu : usr=98.47%, sys=1.12%, ctx=14, majf=0, minf=11 00:44:22.375 IO depths : 1=1.2%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:44:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.375 filename2: (groupid=0, jobs=1): err= 0: pid=1937231: Tue Oct 8 
21:11:50 2024 00:44:22.375 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10014msec) 00:44:22.375 slat (usec): min=6, max=152, avg=30.88, stdev=11.78 00:44:22.375 clat (usec): min=16213, max=64907, avg=33877.62, stdev=2391.23 00:44:22.375 lat (usec): min=16230, max=64923, avg=33908.50, stdev=2390.71 00:44:22.375 clat percentiles (usec): 00:44:22.375 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.375 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.375 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.375 | 99.00th=[39584], 99.50th=[44303], 99.90th=[64750], 99.95th=[64750], 00:44:22.375 | 99.99th=[64750] 00:44:22.375 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1868.95, stdev=76.15, samples=20 00:44:22.375 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.375 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.375 cpu : usr=97.80%, sys=1.45%, ctx=106, majf=0, minf=9 00:44:22.375 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.375 filename2: (groupid=0, jobs=1): err= 0: pid=1937232: Tue Oct 8 21:11:50 2024 00:44:22.375 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10015msec) 00:44:22.375 slat (usec): min=7, max=139, avg=30.09, stdev=12.16 00:44:22.375 clat (usec): min=16265, max=65715, avg=33878.31, stdev=2408.20 00:44:22.375 lat (usec): min=16279, max=65729, avg=33908.40, stdev=2407.78 00:44:22.375 clat percentiles (usec): 00:44:22.375 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:44:22.375 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:44:22.375 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:44:22.375 | 99.00th=[39584], 99.50th=[44303], 99.90th=[65799], 99.95th=[65799], 00:44:22.375 | 99.99th=[65799] 00:44:22.375 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1868.80, stdev=76.58, samples=20 00:44:22.375 iops : min= 416, max= 480, avg=467.20, stdev=19.14, samples=20 00:44:22.375 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:44:22.375 cpu : usr=96.96%, sys=1.88%, ctx=235, majf=0, minf=9 00:44:22.375 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.375 filename2: (groupid=0, jobs=1): err= 0: pid=1937233: Tue Oct 8 21:11:50 2024 00:44:22.375 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10015msec) 00:44:22.375 slat (nsec): min=5495, max=69736, avg=28688.65, stdev=14747.77 00:44:22.375 clat (usec): min=15529, max=65754, avg=33117.28, stdev=4064.75 00:44:22.375 lat (usec): min=15553, max=65769, avg=33145.96, stdev=4067.00 00:44:22.375 clat percentiles (usec): 00:44:22.375 | 1.00th=[21627], 5.00th=[24511], 10.00th=[29230], 20.00th=[33162], 00:44:22.375 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:44:22.375 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 
00:44:22.375 | 99.00th=[44827], 99.50th=[50070], 99.90th=[65799], 99.95th=[65799], 00:44:22.375 | 99.99th=[65799] 00:44:22.375 bw ( KiB/s): min= 1664, max= 2256, per=4.26%, avg=1915.20, stdev=133.88, samples=20 00:44:22.375 iops : min= 416, max= 564, avg=478.80, stdev=33.47, samples=20 00:44:22.375 lat (msec) : 20=0.50%, 50=98.96%, 100=0.54% 00:44:22.375 cpu : usr=96.99%, sys=1.74%, ctx=255, majf=0, minf=9 00:44:22.375 IO depths : 1=0.3%, 2=5.5%, 4=21.4%, 8=60.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:44:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.375 issued rwts: total=4804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:22.375 00:44:22.375 Run status group 0 (all jobs): 00:44:22.375 READ: bw=43.9MiB/s (46.0MB/s), 1867KiB/s-1919KiB/s (1912kB/s-1965kB/s), io=440MiB (462MB), run=10006-10030msec 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.375 21:11:50 
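As a quick consistency check on the run summary above: each of the 24 jobs reports roughly 1867 to 1919 KiB/s, and 24 x ~1872 KiB/s is about 44,900 KiB/s, i.e. ~43.9 MiB/s, which matches the aggregate READ bandwidth; each job's per= share of ~4.14 to 4.26% is likewise close to 1/24 of the group, and 440 MiB moved over the ~10 s runtime gives the same ~44 MiB/s figure.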
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.375 bdev_null0 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.375 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 [2024-10-08 21:11:50.650153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 bdev_null1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:22.376 { 00:44:22.376 "params": { 
00:44:22.376 "name": "Nvme$subsystem", 00:44:22.376 "trtype": "$TEST_TRANSPORT", 00:44:22.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:22.376 "adrfam": "ipv4", 00:44:22.376 "trsvcid": "$NVMF_PORT", 00:44:22.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:22.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:22.376 "hdgst": ${hdgst:-false}, 00:44:22.376 "ddgst": ${ddgst:-false} 00:44:22.376 }, 00:44:22.376 "method": "bdev_nvme_attach_controller" 00:44:22.376 } 00:44:22.376 EOF 00:44:22.376 )") 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:22.376 { 00:44:22.376 "params": { 00:44:22.376 "name": "Nvme$subsystem", 00:44:22.376 "trtype": "$TEST_TRANSPORT", 00:44:22.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:22.376 "adrfam": "ipv4", 00:44:22.376 "trsvcid": "$NVMF_PORT", 00:44:22.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:22.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:22.376 "hdgst": ${hdgst:-false}, 00:44:22.376 "ddgst": ${ddgst:-false} 00:44:22.376 }, 00:44:22.376 "method": "bdev_nvme_attach_controller" 00:44:22.376 } 00:44:22.376 EOF 00:44:22.376 )") 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:44:22.376 21:11:50 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:22.376 "params": { 00:44:22.376 "name": "Nvme0", 00:44:22.376 "trtype": "tcp", 00:44:22.376 "traddr": "10.0.0.2", 00:44:22.376 "adrfam": "ipv4", 00:44:22.376 "trsvcid": "4420", 00:44:22.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:22.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:22.376 "hdgst": false, 00:44:22.376 "ddgst": false 00:44:22.376 }, 00:44:22.376 "method": "bdev_nvme_attach_controller" 00:44:22.376 },{ 00:44:22.376 "params": { 00:44:22.376 "name": "Nvme1", 00:44:22.376 "trtype": "tcp", 00:44:22.376 "traddr": "10.0.0.2", 00:44:22.376 "adrfam": "ipv4", 00:44:22.376 "trsvcid": "4420", 00:44:22.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:22.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:22.376 "hdgst": false, 00:44:22.376 "ddgst": false 00:44:22.376 }, 00:44:22.376 "method": "bdev_nvme_attach_controller" 00:44:22.376 }' 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:22.376 21:11:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.376 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:22.376 ... 00:44:22.376 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:22.376 ... 
00:44:22.376 fio-3.35 00:44:22.376 Starting 4 threads 00:44:28.943 00:44:28.943 filename0: (groupid=0, jobs=1): err= 0: pid=1938499: Tue Oct 8 21:11:56 2024 00:44:28.943 read: IOPS=1347, BW=10.5MiB/s (11.0MB/s)(52.6MiB/5002msec) 00:44:28.943 slat (nsec): min=4776, max=71548, avg=15881.15, stdev=6791.80 00:44:28.943 clat (usec): min=829, max=18816, avg=5879.16, stdev=2423.37 00:44:28.943 lat (usec): min=842, max=18825, avg=5895.04, stdev=2422.34 00:44:28.943 clat percentiles (usec): 00:44:28.943 | 1.00th=[ 2638], 5.00th=[ 4080], 10.00th=[ 4293], 20.00th=[ 4490], 00:44:28.943 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:44:28.943 | 70.00th=[ 5407], 80.00th=[ 8356], 90.00th=[ 9765], 95.00th=[10290], 00:44:28.943 | 99.00th=[13304], 99.50th=[15270], 99.90th=[18744], 99.95th=[18744], 00:44:28.943 | 99.99th=[18744] 00:44:28.943 bw ( KiB/s): min= 6304, max=13851, per=24.74%, avg=10770.70, stdev=3266.37, samples=10 00:44:28.943 iops : min= 788, max= 1731, avg=1346.30, stdev=408.26, samples=10 00:44:28.943 lat (usec) : 1000=0.04% 00:44:28.943 lat (msec) : 2=0.58%, 4=3.46%, 10=88.22%, 20=7.70% 00:44:28.943 cpu : usr=94.84%, sys=4.54%, ctx=41, majf=0, minf=10 00:44:28.943 IO depths : 1=0.3%, 2=18.2%, 4=55.0%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:28.943 filename0: (groupid=0, jobs=1): err= 0: pid=1938500: Tue Oct 8 21:11:56 2024 00:44:28.943 read: IOPS=1363, BW=10.6MiB/s (11.2MB/s)(53.3MiB/5003msec) 00:44:28.943 slat (nsec): min=5718, max=78063, avg=13916.59, stdev=6171.27 00:44:28.943 clat (usec): min=852, max=16539, avg=5817.17, stdev=2277.53 00:44:28.943 lat (usec): min=867, max=16547, avg=5831.08, stdev=2276.48 00:44:28.943 clat percentiles (usec): 00:44:28.943 | 1.00th=[ 3458], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4490], 00:44:28.943 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:44:28.943 | 70.00th=[ 5080], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10290], 00:44:28.943 | 99.00th=[11338], 99.50th=[12649], 99.90th=[15401], 99.95th=[16450], 00:44:28.943 | 99.99th=[16581] 00:44:28.943 bw ( KiB/s): min= 6288, max=13872, per=25.04%, avg=10899.20, stdev=3281.33, samples=10 00:44:28.943 iops : min= 786, max= 1734, avg=1362.40, stdev=410.17, samples=10 00:44:28.943 lat (usec) : 1000=0.04% 00:44:28.943 lat (msec) : 2=0.37%, 4=3.01%, 10=89.56%, 20=7.02% 00:44:28.943 cpu : usr=95.90%, sys=3.60%, ctx=29, majf=0, minf=0 00:44:28.943 IO depths : 1=0.4%, 2=17.6%, 4=55.6%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 issued rwts: total=6820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:28.943 filename1: (groupid=0, jobs=1): err= 0: pid=1938501: Tue Oct 8 21:11:56 2024 00:44:28.943 read: IOPS=1388, BW=10.8MiB/s (11.4MB/s)(54.3MiB/5005msec) 00:44:28.943 slat (nsec): min=5139, max=51330, avg=13698.44, stdev=6019.89 00:44:28.943 clat (usec): min=1058, max=13903, avg=5716.58, stdev=2190.53 00:44:28.943 lat (usec): min=1090, max=13917, avg=5730.28, stdev=2189.62 00:44:28.943 clat percentiles (usec): 
00:44:28.943 | 1.00th=[ 2671], 5.00th=[ 3916], 10.00th=[ 4178], 20.00th=[ 4424], 00:44:28.943 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:44:28.943 | 70.00th=[ 5014], 80.00th=[ 8455], 90.00th=[ 9634], 95.00th=[10159], 00:44:28.943 | 99.00th=[10552], 99.50th=[10945], 99.90th=[12256], 99.95th=[12911], 00:44:28.943 | 99.99th=[13960] 00:44:28.943 bw ( KiB/s): min= 6576, max=14224, per=25.50%, avg=11102.40, stdev=3326.06, samples=10 00:44:28.943 iops : min= 822, max= 1778, avg=1387.80, stdev=415.76, samples=10 00:44:28.943 lat (msec) : 2=0.63%, 4=5.08%, 10=88.54%, 20=5.74% 00:44:28.943 cpu : usr=92.39%, sys=5.76%, ctx=127, majf=0, minf=0 00:44:28.943 IO depths : 1=0.3%, 2=14.1%, 4=57.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 issued rwts: total=6947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:28.943 filename1: (groupid=0, jobs=1): err= 0: pid=1938502: Tue Oct 8 21:11:56 2024 00:44:28.943 read: IOPS=1345, BW=10.5MiB/s (11.0MB/s)(52.6MiB/5001msec) 00:44:28.943 slat (nsec): min=5156, max=74310, avg=15610.83, stdev=7230.77 00:44:28.943 clat (usec): min=876, max=19417, avg=5884.66, stdev=2550.29 00:44:28.943 lat (usec): min=898, max=19426, avg=5900.27, stdev=2548.54 00:44:28.943 clat percentiles (usec): 00:44:28.943 | 1.00th=[ 2311], 5.00th=[ 4047], 10.00th=[ 4293], 20.00th=[ 4490], 00:44:28.943 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:44:28.943 | 70.00th=[ 5211], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10290], 00:44:28.943 | 99.00th=[14746], 99.50th=[16909], 99.90th=[19006], 99.95th=[19268], 00:44:28.943 | 99.99th=[19530] 00:44:28.943 bw ( KiB/s): min= 6336, max=13952, per=23.98%, avg=10439.11, stdev=3371.59, samples=9 00:44:28.943 iops : min= 792, max= 1744, avg=1304.89, stdev=421.45, samples=9 00:44:28.943 lat (usec) : 1000=0.07% 00:44:28.943 lat (msec) : 2=0.68%, 4=3.98%, 10=87.13%, 20=8.13% 00:44:28.943 cpu : usr=95.66%, sys=3.86%, ctx=7, majf=0, minf=9 00:44:28.943 IO depths : 1=0.3%, 2=18.0%, 4=55.2%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:28.943 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:28.943 00:44:28.943 Run status group 0 (all jobs): 00:44:28.943 READ: bw=42.5MiB/s (44.6MB/s), 10.5MiB/s-10.8MiB/s (11.0MB/s-11.4MB/s), io=213MiB (223MB), run=5001-5005msec 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:28.943 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 00:44:28.944 real 0m25.513s 00:44:28.944 user 4m33.900s 00:44:28.944 sys 0m6.558s 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 ************************************ 00:44:28.944 END TEST fio_dif_rand_params 00:44:28.944 ************************************ 00:44:28.944 21:11:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:28.944 21:11:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:28.944 21:11:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 ************************************ 00:44:28.944 START TEST fio_dif_digest 00:44:28.944 ************************************ 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 bdev_null0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:28.944 [2024-10-08 21:11:57.251607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:28.944 { 00:44:28.944 "params": { 00:44:28.944 "name": "Nvme$subsystem", 00:44:28.944 "trtype": "$TEST_TRANSPORT", 00:44:28.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:28.944 "adrfam": "ipv4", 00:44:28.944 "trsvcid": "$NVMF_PORT", 00:44:28.944 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:44:28.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:28.944 "hdgst": ${hdgst:-false}, 00:44:28.944 "ddgst": ${ddgst:-false} 00:44:28.944 }, 00:44:28.944 "method": "bdev_nvme_attach_controller" 00:44:28.944 } 00:44:28.944 EOF 00:44:28.944 )") 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:28.944 "params": { 00:44:28.944 "name": "Nvme0", 00:44:28.944 "trtype": "tcp", 00:44:28.944 "traddr": "10.0.0.2", 00:44:28.944 "adrfam": "ipv4", 00:44:28.944 "trsvcid": "4420", 00:44:28.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:28.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:28.944 "hdgst": true, 00:44:28.944 "ddgst": true 00:44:28.944 }, 00:44:28.944 "method": "bdev_nvme_attach_controller" 00:44:28.944 }' 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:28.944 21:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:28.944 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:28.944 ... 
00:44:28.944 fio-3.35 00:44:28.944 Starting 3 threads 00:44:41.153 00:44:41.153 filename0: (groupid=0, jobs=1): err= 0: pid=1939320: Tue Oct 8 21:12:08 2024 00:44:41.153 read: IOPS=160, BW=20.1MiB/s (21.1MB/s)(201MiB/10006msec) 00:44:41.154 slat (nsec): min=4807, max=58436, avg=21115.50, stdev=6607.73 00:44:41.154 clat (usec): min=6131, max=46198, avg=18629.87, stdev=5889.14 00:44:41.154 lat (usec): min=6148, max=46242, avg=18650.99, stdev=5894.52 00:44:41.154 clat percentiles (usec): 00:44:41.154 | 1.00th=[13960], 5.00th=[14615], 10.00th=[15008], 20.00th=[15533], 00:44:41.154 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:44:41.154 | 70.00th=[18482], 80.00th=[19792], 90.00th=[21365], 95.00th=[36439], 00:44:41.154 | 99.00th=[40633], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:44:41.154 | 99.99th=[46400] 00:44:41.154 bw ( KiB/s): min= 9984, max=24064, per=32.28%, avg=20559.10, stdev=4350.84, samples=20 00:44:41.154 iops : min= 78, max= 188, avg=160.60, stdev=33.98, samples=20 00:44:41.154 lat (msec) : 10=0.12%, 20=82.04%, 50=17.84% 00:44:41.154 cpu : usr=95.04%, sys=4.39%, ctx=16, majf=0, minf=79 00:44:41.154 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:41.154 filename0: (groupid=0, jobs=1): err= 0: pid=1939321: Tue Oct 8 21:12:08 2024 00:44:41.154 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(218MiB/10043msec) 00:44:41.154 slat (nsec): min=5210, max=52871, avg=22327.47, stdev=4893.58 00:44:41.154 clat (usec): min=11915, max=70403, avg=17254.69, stdev=5127.94 00:44:41.154 lat (usec): min=11936, max=70426, avg=17277.02, stdev=5128.57 00:44:41.154 clat percentiles (usec): 00:44:41.154 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 00:44:41.154 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[16057], 00:44:41.154 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20841], 95.00th=[30278], 00:44:41.154 | 99.00th=[34341], 99.50th=[36439], 99.90th=[63701], 99.95th=[70779], 00:44:41.154 | 99.99th=[70779] 00:44:41.154 bw ( KiB/s): min=12544, max=25856, per=34.95%, avg=22259.20, stdev=4205.26, samples=20 00:44:41.154 iops : min= 98, max= 202, avg=173.90, stdev=32.85, samples=20 00:44:41.154 lat (msec) : 20=88.86%, 50=10.86%, 100=0.29% 00:44:41.154 cpu : usr=93.36%, sys=4.80%, ctx=357, majf=0, minf=83 00:44:41.154 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:41.154 filename0: (groupid=0, jobs=1): err= 0: pid=1939323: Tue Oct 8 21:12:08 2024 00:44:41.154 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(206MiB/10048msec) 00:44:41.154 slat (nsec): min=4881, max=56121, avg=21404.26, stdev=6419.95 00:44:41.154 clat (usec): min=11617, max=52411, avg=18227.99, stdev=5645.77 00:44:41.154 lat (usec): min=11638, max=52428, avg=18249.39, stdev=5650.70 00:44:41.154 clat percentiles (usec): 00:44:41.154 | 1.00th=[13566], 5.00th=[14222], 10.00th=[14746], 20.00th=[15270], 
00:44:41.154 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:44:41.154 | 70.00th=[17957], 80.00th=[19268], 90.00th=[21365], 95.00th=[34341], 00:44:41.154 | 99.00th=[40109], 99.50th=[40633], 99.90th=[48497], 99.95th=[52167], 00:44:41.154 | 99.99th=[52167] 00:44:41.154 bw ( KiB/s): min=10240, max=24832, per=33.08%, avg=21068.80, stdev=4395.85, samples=20 00:44:41.154 iops : min= 80, max= 194, avg=164.60, stdev=34.34, samples=20 00:44:41.154 lat (msec) : 20=85.57%, 50=14.37%, 100=0.06% 00:44:41.154 cpu : usr=95.39%, sys=4.03%, ctx=15, majf=0, minf=83 00:44:41.154 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.154 issued rwts: total=1649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:41.154 00:44:41.154 Run status group 0 (all jobs): 00:44:41.154 READ: bw=62.2MiB/s (65.2MB/s), 20.1MiB/s-21.7MiB/s (21.1MB/s-22.7MB/s), io=625MiB (655MB), run=10006-10048msec 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:41.154 00:44:41.154 real 0m11.555s 00:44:41.154 user 0m29.862s 00:44:41.154 sys 0m1.767s 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:41.154 21:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:41.154 ************************************ 00:44:41.154 END TEST fio_dif_digest 00:44:41.154 ************************************ 00:44:41.154 21:12:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:41.154 21:12:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:41.154 rmmod nvme_tcp 00:44:41.154 rmmod nvme_fabrics 00:44:41.154 rmmod nvme_keyring 00:44:41.154 21:12:08 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1933199 ']' 00:44:41.154 21:12:08 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1933199 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1933199 ']' 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1933199 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1933199 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1933199' 00:44:41.154 killing process with pid 1933199 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1933199 00:44:41.154 21:12:08 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1933199 00:44:41.154 21:12:09 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:44:41.154 21:12:09 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:42.089 Waiting for block devices as requested 00:44:42.089 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:44:42.089 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:42.349 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:42.349 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:42.349 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:42.609 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:42.609 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:42.609 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:42.609 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:42.869 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:42.869 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:42.869 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:43.128 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:43.128 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:43.128 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:43.128 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:43.388 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:43.388 21:12:12 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:43.388 21:12:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:43.388 21:12:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:45.931 21:12:14 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:45.931 
00:44:45.931 real 1m12.362s 00:44:45.931 user 6m35.524s 00:44:45.931 sys 0m20.021s 00:44:45.931 21:12:14 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:45.931 21:12:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:45.931 ************************************ 00:44:45.931 END TEST nvmf_dif 00:44:45.931 ************************************ 00:44:45.931 21:12:14 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:45.931 21:12:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:45.931 21:12:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:45.931 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:44:45.931 ************************************ 00:44:45.931 START TEST nvmf_abort_qd_sizes 00:44:45.931 ************************************ 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:45.931 * Looking for test storage... 00:44:45.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:45.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:45.931 --rc genhtml_branch_coverage=1 00:44:45.931 --rc genhtml_function_coverage=1 00:44:45.931 --rc genhtml_legend=1 00:44:45.931 --rc geninfo_all_blocks=1 00:44:45.931 --rc geninfo_unexecuted_blocks=1 00:44:45.931 00:44:45.931 ' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:45.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:45.931 --rc genhtml_branch_coverage=1 00:44:45.931 --rc genhtml_function_coverage=1 00:44:45.931 --rc genhtml_legend=1 00:44:45.931 --rc geninfo_all_blocks=1 00:44:45.931 --rc geninfo_unexecuted_blocks=1 00:44:45.931 00:44:45.931 ' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:45.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:45.931 --rc genhtml_branch_coverage=1 00:44:45.931 --rc genhtml_function_coverage=1 00:44:45.931 --rc genhtml_legend=1 00:44:45.931 --rc geninfo_all_blocks=1 00:44:45.931 --rc geninfo_unexecuted_blocks=1 00:44:45.931 00:44:45.931 ' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:45.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:45.931 --rc genhtml_branch_coverage=1 00:44:45.931 --rc genhtml_function_coverage=1 00:44:45.931 --rc genhtml_legend=1 00:44:45.931 --rc geninfo_all_blocks=1 00:44:45.931 --rc geninfo_unexecuted_blocks=1 00:44:45.931 00:44:45.931 ' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:45.931 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:45.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:45.932 21:12:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:49.223 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:49.223 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:49.223 Found net devices under 0000:84:00.0: cvl_0_0 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:49.223 Found net devices under 0000:84:00.1: cvl_0_1 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:49.223 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:49.224 21:12:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:49.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:49.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:44:49.224 00:44:49.224 --- 10.0.0.2 ping statistics --- 00:44:49.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.224 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:49.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:49.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:44:49.224 00:44:49.224 --- 10.0.0.1 ping statistics --- 00:44:49.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.224 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:44:49.224 21:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:51.140 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:51.140 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:51.140 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:51.140 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:51.141 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:51.709 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1944957 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1944957 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1944957 ']' 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
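With both directions reachable, nvmfappstart launches the SPDK NVMe-oF target inside the namespace that now owns cvl_0_0 and blocks until its RPC socket answers. A minimal sketch of that step, using the flags visible in the trace (waitforlisten's actual implementation differs; polling rpc_get_methods is just one way to detect readiness):

    # run nvmf_tgt inside the target namespace, four cores, all tracepoint groups enabled
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # wait until the RPC server answers on the default UNIX domain socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The "Waiting for process to start up..." notice that follows in the log is waitforlisten performing the same check.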
00:44:51.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:51.968 21:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:51.968 [2024-10-08 21:12:20.716123] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:44:51.968 [2024-10-08 21:12:20.716238] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:52.228 [2024-10-08 21:12:20.838277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:52.487 [2024-10-08 21:12:21.067605] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:52.487 [2024-10-08 21:12:21.067741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:52.487 [2024-10-08 21:12:21.067779] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:52.487 [2024-10-08 21:12:21.067808] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:52.487 [2024-10-08 21:12:21.067835] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:52.487 [2024-10-08 21:12:21.071518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:44:52.487 [2024-10-08 21:12:21.071617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:44:52.487 [2024-10-08 21:12:21.071725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:44:52.487 [2024-10-08 21:12:21.071731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:52.487 
21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:52.487 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:44:52.744 21:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:52.744 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:52.744 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:52.744 21:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:52.744 ************************************ 00:44:52.744 START TEST spdk_target_abort 00:44:52.744 ************************************ 00:44:52.744 21:12:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:44:52.744 21:12:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:52.744 21:12:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:44:52.744 21:12:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.744 21:12:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.022 spdk_targetn1 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.022 [2024-10-08 21:12:24.126156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.022 [2024-10-08 21:12:24.158729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:56.022 21:12:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:59.301 Initializing NVMe Controllers 00:44:59.301 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:59.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:59.301 Initialization complete. Launching workers. 00:44:59.301 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11716, failed: 0 00:44:59.301 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1285, failed to submit 10431 00:44:59.301 success 735, unsuccessful 550, failed 0 00:44:59.301 21:12:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:59.301 21:12:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:02.580 Initializing NVMe Controllers 00:45:02.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:02.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:02.580 Initialization complete. Launching workers. 00:45:02.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8669, failed: 0 00:45:02.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7440 00:45:02.580 success 330, unsuccessful 899, failed 0 00:45:02.580 21:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:02.580 21:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:05.859 Initializing NVMe Controllers 00:45:05.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:05.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:05.859 Initialization complete. Launching workers. 
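Condensed, the spdk_target_abort flow traced above issues a short RPC sequence against the namespaced target and then drives it with the abort example at queue depths 4, 24 and 64. A sketch of those commands as they appear in the trace (rpc.py is assumed to talk to the default /var/tmp/spdk.sock; the 64-deep run is the one whose completion counts follow below):

    # attach the local PCIe drive; the resulting bdev is spdk_targetn1
    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # one abort run per queue depth
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done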
00:45:05.859 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31272, failed: 0 00:45:05.859 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2646, failed to submit 28626 00:45:05.859 success 513, unsuccessful 2133, failed 0 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.859 21:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1944957 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1944957 ']' 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1944957 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1944957 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1944957' 00:45:06.791 killing process with pid 1944957 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1944957 00:45:06.791 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1944957 00:45:07.361 00:45:07.361 real 0m14.578s 00:45:07.361 user 0m54.833s 00:45:07.361 sys 0m3.137s 00:45:07.361 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:07.361 21:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:07.361 ************************************ 00:45:07.361 END TEST spdk_target_abort 00:45:07.361 ************************************ 00:45:07.361 21:12:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:07.361 21:12:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:07.361 21:12:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:07.361 21:12:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:07.361 ************************************ 00:45:07.361 START TEST kernel_target_abort 00:45:07.361 
************************************ 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:07.362 21:12:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:09.271 Waiting for block devices as requested 00:45:09.271 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:45:09.271 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:09.271 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:09.530 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:09.530 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:09.530 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:09.790 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:09.790 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:09.790 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:09.790 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:10.049 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:10.049 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:10.049 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:10.310 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:10.310 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:10.310 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:10.594 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:10.594 No valid GPT data, bailing 00:45:10.594 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:10.859 21:12:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:45:10.859 00:45:10.859 Discovery Log Number of Records 2, Generation counter 2 00:45:10.859 =====Discovery Log Entry 0====== 00:45:10.859 trtype: tcp 00:45:10.859 adrfam: ipv4 00:45:10.859 subtype: current discovery subsystem 00:45:10.859 treq: not specified, sq flow control disable supported 00:45:10.859 portid: 1 00:45:10.859 trsvcid: 4420 00:45:10.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:10.859 traddr: 10.0.0.1 00:45:10.859 eflags: none 00:45:10.859 sectype: none 00:45:10.859 =====Discovery Log Entry 1====== 00:45:10.859 trtype: tcp 00:45:10.859 adrfam: ipv4 00:45:10.859 subtype: nvme subsystem 00:45:10.859 treq: not specified, sq flow control disable supported 00:45:10.859 portid: 1 00:45:10.859 trsvcid: 4420 00:45:10.859 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:10.859 traddr: 10.0.0.1 00:45:10.859 eflags: none 00:45:10.859 sectype: none 00:45:10.859 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:10.860 21:12:39 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:10.860 21:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.160 Initializing NVMe Controllers 00:45:14.160 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:14.160 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:14.160 Initialization complete. Launching workers. 00:45:14.160 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 20108, failed: 0 00:45:14.160 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20108, failed to submit 0 00:45:14.160 success 0, unsuccessful 20108, failed 0 00:45:14.160 21:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:14.160 21:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:17.451 Initializing NVMe Controllers 00:45:17.451 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:17.451 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:17.451 Initialization complete. Launching workers. 
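The kernel_target_abort setup traced above builds the same testnqn target with the in-kernel nvmet driver through configfs. The xtrace output does not show where each echo is redirected, so the sketch below maps the values that do appear onto the standard nvmet attribute names (the attribute names are assumed, not taken from the log):

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"    # the non-zoned, non-GPT drive picked above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"               # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420                   # should list discovery + testnqn entries

The statistics for the 24-deep abort run against this kernel target follow below.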
00:45:17.451 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51247, failed: 0 00:45:17.451 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12402, failed to submit 38845 00:45:17.451 success 0, unsuccessful 12402, failed 0 00:45:17.451 21:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:17.451 21:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:20.745 Initializing NVMe Controllers 00:45:20.745 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:20.745 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:20.745 Initialization complete. Launching workers. 00:45:20.745 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34187, failed: 0 00:45:20.745 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8538, failed to submit 25649 00:45:20.745 success 0, unsuccessful 8538, failed 0 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:45:20.745 21:12:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:45:20.745 21:12:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:22.124 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:22.124 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:22.124 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:22.124 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:22.124 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:22.124 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:22.124 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:22.383 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:22.383 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:45:22.383 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:23.322 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:45:23.322 00:45:23.322 real 0m15.987s 00:45:23.322 user 0m6.938s 00:45:23.322 sys 0m4.395s 00:45:23.322 21:12:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:23.322 21:12:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:23.322 ************************************ 00:45:23.322 END TEST kernel_target_abort 00:45:23.322 ************************************ 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:23.322 21:12:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:23.322 rmmod nvme_tcp 00:45:23.322 rmmod nvme_fabrics 00:45:23.322 rmmod nvme_keyring 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1944957 ']' 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1944957 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1944957 ']' 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1944957 00:45:23.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1944957) - No such process 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1944957 is not found' 00:45:23.322 Process with pid 1944957 is not found 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:45:23.322 21:12:52 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:25.226 Waiting for block devices as requested 00:45:25.226 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:45:25.226 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:25.226 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:25.226 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:25.484 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:25.484 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:25.484 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:25.742 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:25.742 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:25.742 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:25.742 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:26.000 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:26.000 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:26.000 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:26.000 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:26.000 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:26.259 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:26.259 21:12:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:28.793 21:12:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:28.793 00:45:28.793 real 0m42.779s 00:45:28.793 user 1m4.955s 00:45:28.793 sys 0m12.769s 00:45:28.793 21:12:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:28.793 21:12:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:28.793 ************************************ 00:45:28.793 END TEST nvmf_abort_qd_sizes 00:45:28.793 ************************************ 00:45:28.793 21:12:57 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:28.793 21:12:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:28.793 21:12:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:28.793 21:12:57 -- common/autotest_common.sh@10 -- # set +x 00:45:28.793 ************************************ 00:45:28.793 START TEST keyring_file 00:45:28.793 ************************************ 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:28.793 * Looking for test storage... 
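The nvmftestfini teardown traced above relies on the fact that every firewall rule added by the test carries an SPDK_NVMF comment, so cleanup is a filter over iptables-save rather than a list of explicit deletes. A rough equivalent, with names from this run (the body of the _remove_spdk_ns helper is not shown in the trace, so the netns removal here is assumed):

    # drop every rule the test tagged, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # unload the initiator-side kernel NVMe modules loaded for the run
    modprobe -r nvme-tcp nvme-fabrics
    # remove the target namespace and any leftover addresses
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1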
00:45:28.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:28.793 21:12:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:28.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.793 --rc genhtml_branch_coverage=1 00:45:28.793 --rc genhtml_function_coverage=1 00:45:28.793 --rc genhtml_legend=1 00:45:28.793 --rc geninfo_all_blocks=1 00:45:28.793 --rc geninfo_unexecuted_blocks=1 00:45:28.793 00:45:28.793 ' 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:28.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.793 --rc genhtml_branch_coverage=1 00:45:28.793 --rc genhtml_function_coverage=1 00:45:28.793 --rc genhtml_legend=1 00:45:28.793 --rc geninfo_all_blocks=1 
00:45:28.793 --rc geninfo_unexecuted_blocks=1 00:45:28.793 00:45:28.793 ' 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:28.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.793 --rc genhtml_branch_coverage=1 00:45:28.793 --rc genhtml_function_coverage=1 00:45:28.793 --rc genhtml_legend=1 00:45:28.793 --rc geninfo_all_blocks=1 00:45:28.793 --rc geninfo_unexecuted_blocks=1 00:45:28.793 00:45:28.793 ' 00:45:28.793 21:12:57 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:28.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.793 --rc genhtml_branch_coverage=1 00:45:28.793 --rc genhtml_function_coverage=1 00:45:28.793 --rc genhtml_legend=1 00:45:28.793 --rc geninfo_all_blocks=1 00:45:28.793 --rc geninfo_unexecuted_blocks=1 00:45:28.793 00:45:28.793 ' 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:28.794 21:12:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:28.794 21:12:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:28.794 21:12:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:28.794 21:12:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:28.794 21:12:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.794 21:12:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.794 21:12:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.794 21:12:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:28.794 21:12:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:28.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
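prep_key, traced below, mktemps a file per PSK, writes the interchange-format key into it and chmods it to 0600; the test then registers both files with the bdevperf instance over its RPC socket and inspects them with keyring_get_keys. Condensed to the RPC side, with the file names and socket path as they appear later in this run:

    rpc=./scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC
    $rpc -s $sock keyring_file_add_key key1 /tmp/tmp.VPvBFVyIQZ
    # verify what was registered: path and reference count per key
    $rpc -s $sock keyring_get_keys | jq '.[] | select(.name == "key0")'

Each key starts with a refcnt of 1; as the later trace shows, key0's refcnt reaches 2 once bdev_nvme_attach_controller --psk key0 attaches a TLS-protected controller through it.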
00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AKoOxxRetC 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AKoOxxRetC 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AKoOxxRetC 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AKoOxxRetC 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VPvBFVyIQZ 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:45:28.794 21:12:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VPvBFVyIQZ 00:45:28.794 21:12:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VPvBFVyIQZ 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VPvBFVyIQZ 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1950871 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:28.794 21:12:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1950871 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1950871 ']' 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:28.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:28.794 21:12:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:29.052 [2024-10-08 21:12:57.582093] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:45:29.052 [2024-10-08 21:12:57.582207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950871 ] 00:45:29.052 [2024-10-08 21:12:57.655820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:29.052 [2024-10-08 21:12:57.781177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:45:29.618 21:12:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:29.618 21:12:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:29.618 21:12:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:29.618 21:12:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.618 21:12:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:29.618 [2024-10-08 21:12:58.231742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:29.618 null0 00:45:29.618 [2024-10-08 21:12:58.265776] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:29.618 [2024-10-08 21:12:58.266688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:29.618 21:12:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.619 21:12:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:29.619 [2024-10-08 21:12:58.293767] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:29.619 request: 00:45:29.619 { 00:45:29.619 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:29.619 "secure_channel": false, 00:45:29.619 "listen_address": { 00:45:29.619 "trtype": "tcp", 00:45:29.619 "traddr": "127.0.0.1", 00:45:29.619 "trsvcid": "4420" 00:45:29.619 }, 00:45:29.619 "method": "nvmf_subsystem_add_listener", 00:45:29.619 "req_id": 1 00:45:29.619 } 00:45:29.619 Got JSON-RPC error response 00:45:29.619 response: 00:45:29.619 { 00:45:29.619 
"code": -32602, 00:45:29.619 "message": "Invalid parameters" 00:45:29.619 } 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:29.619 21:12:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=1950998 00:45:29.619 21:12:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:29.619 21:12:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1950998 /var/tmp/bperf.sock 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1950998 ']' 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:29.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:29.619 21:12:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:29.878 [2024-10-08 21:12:58.403173] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:45:29.878 [2024-10-08 21:12:58.403329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950998 ] 00:45:29.878 [2024-10-08 21:12:58.550796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:30.138 [2024-10-08 21:12:58.770776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:30.397 21:12:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:30.397 21:12:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:30.397 21:12:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:30.397 21:12:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:30.965 21:12:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VPvBFVyIQZ 00:45:30.965 21:12:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VPvBFVyIQZ 00:45:31.534 21:13:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:31.534 21:13:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:31.534 21:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.534 21:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.534 21:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:45:32.102 21:13:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AKoOxxRetC == \/\t\m\p\/\t\m\p\.\A\K\o\O\x\x\R\e\t\C ]] 00:45:32.102 21:13:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:32.102 21:13:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:32.102 21:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.102 21:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.102 21:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:32.362 21:13:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.VPvBFVyIQZ == \/\t\m\p\/\t\m\p\.\V\P\v\B\F\V\y\I\Q\Z ]] 00:45:32.362 21:13:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:32.362 21:13:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.362 21:13:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.362 21:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.362 21:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.362 21:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.622 21:13:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:32.622 21:13:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:32.622 21:13:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:32.622 21:13:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.622 21:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.622 21:13:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.622 21:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:33.562 21:13:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:33.562 21:13:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:33.562 21:13:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.131 [2024-10-08 21:13:02.618482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:34.131 nvme0n1 00:45:34.131 21:13:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:34.131 21:13:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:34.131 21:13:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.131 21:13:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.131 21:13:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.131 21:13:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.391 21:13:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:34.391 21:13:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:34.391 21:13:03 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:45:34.391 21:13:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.391 21:13:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.391 21:13:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:34.391 21:13:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.650 21:13:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:34.650 21:13:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:34.911 Running I/O for 1 seconds... 00:45:35.862 3893.00 IOPS, 15.21 MiB/s 00:45:35.862 Latency(us) 00:45:35.862 [2024-10-08T19:13:04.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.863 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:35.863 nvme0n1 : 1.02 3945.37 15.41 0.00 0.00 32213.21 9903.22 43884.85 00:45:35.863 [2024-10-08T19:13:04.626Z] =================================================================================================================== 00:45:35.863 [2024-10-08T19:13:04.626Z] Total : 3945.37 15.41 0.00 0.00 32213.21 9903.22 43884.85 00:45:35.863 { 00:45:35.863 "results": [ 00:45:35.863 { 00:45:35.863 "job": "nvme0n1", 00:45:35.863 "core_mask": "0x2", 00:45:35.863 "workload": "randrw", 00:45:35.863 "percentage": 50, 00:45:35.863 "status": "finished", 00:45:35.863 "queue_depth": 128, 00:45:35.863 "io_size": 4096, 00:45:35.863 "runtime": 1.019424, 00:45:35.863 "iops": 3945.3652258530306, 00:45:35.863 "mibps": 15.4115829134884, 00:45:35.863 "io_failed": 0, 00:45:35.863 "io_timeout": 0, 00:45:35.863 "avg_latency_us": 32213.212539919336, 00:45:35.863 "min_latency_us": 9903.217777777778, 00:45:35.863 "max_latency_us": 43884.847407407404 00:45:35.863 } 00:45:35.863 ], 00:45:35.863 "core_count": 1 00:45:35.863 } 00:45:35.863 21:13:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:35.863 21:13:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:36.803 21:13:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:36.803 21:13:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:36.803 21:13:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:36.803 21:13:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:36.803 21:13:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:36.803 21:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.373 21:13:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:37.373 21:13:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:37.373 21:13:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:37.373 21:13:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:37.373 21:13:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:37.373 21:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.373 21:13:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:37.945 21:13:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:37.945 21:13:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:37.945 21:13:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:37.945 21:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:38.514 [2024-10-08 21:13:07.155053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:38.514 [2024-10-08 21:13:07.155683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c9f0 (107): Transport endpoint is not connected 00:45:38.514 [2024-10-08 21:13:07.156648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c9f0 (9): Bad file descriptor 00:45:38.514 [2024-10-08 21:13:07.157639] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:38.514 [2024-10-08 21:13:07.157696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:38.514 [2024-10-08 21:13:07.157731] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:38.514 [2024-10-08 21:13:07.157767] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:45:38.514 request: 00:45:38.514 { 00:45:38.514 "name": "nvme0", 00:45:38.514 "trtype": "tcp", 00:45:38.514 "traddr": "127.0.0.1", 00:45:38.514 "adrfam": "ipv4", 00:45:38.514 "trsvcid": "4420", 00:45:38.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:38.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:38.514 "prchk_reftag": false, 00:45:38.514 "prchk_guard": false, 00:45:38.514 "hdgst": false, 00:45:38.514 "ddgst": false, 00:45:38.514 "psk": "key1", 00:45:38.514 "allow_unrecognized_csi": false, 00:45:38.514 "method": "bdev_nvme_attach_controller", 00:45:38.514 "req_id": 1 00:45:38.514 } 00:45:38.514 Got JSON-RPC error response 00:45:38.514 response: 00:45:38.514 { 00:45:38.514 "code": -5, 00:45:38.514 "message": "Input/output error" 00:45:38.514 } 00:45:38.514 21:13:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:38.514 21:13:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:38.514 21:13:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:38.514 21:13:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:38.514 21:13:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:38.514 21:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:38.514 21:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:38.514 21:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:38.514 21:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:38.514 21:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:39.083 21:13:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:39.083 21:13:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:39.083 21:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:39.083 21:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:39.083 21:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:39.083 21:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.083 21:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:39.343 21:13:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:39.343 21:13:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:39.343 21:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:39.915 21:13:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:39.915 21:13:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:40.486 21:13:09 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:40.486 21:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:40.486 21:13:09 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:41.056 21:13:09 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:41.056 21:13:09 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AKoOxxRetC 00:45:41.056 21:13:09 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:41.056 21:13:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.056 21:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.318 [2024-10-08 21:13:09.985775] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AKoOxxRetC': 0100660 00:45:41.318 [2024-10-08 21:13:09.985856] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:41.318 request: 00:45:41.318 { 00:45:41.318 "name": "key0", 00:45:41.318 "path": "/tmp/tmp.AKoOxxRetC", 00:45:41.318 "method": "keyring_file_add_key", 00:45:41.318 "req_id": 1 00:45:41.318 } 00:45:41.318 Got JSON-RPC error response 00:45:41.318 response: 00:45:41.318 { 00:45:41.318 "code": -1, 00:45:41.318 "message": "Operation not permitted" 00:45:41.318 } 00:45:41.318 21:13:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:41.318 21:13:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:41.318 21:13:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:41.318 21:13:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:41.318 21:13:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AKoOxxRetC 00:45:41.318 21:13:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.318 21:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AKoOxxRetC 00:45:41.951 21:13:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AKoOxxRetC 00:45:41.951 21:13:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:41.951 21:13:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:41.951 21:13:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:41.951 21:13:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:41.951 21:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:41.951 21:13:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:42.521 21:13:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:42.521 21:13:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:42.521 21:13:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:42.521 21:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:43.088 [2024-10-08 21:13:11.594419] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AKoOxxRetC': No such file or directory 00:45:43.088 [2024-10-08 21:13:11.594502] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:43.089 [2024-10-08 21:13:11.594562] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:43.089 [2024-10-08 21:13:11.594593] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:43.089 [2024-10-08 21:13:11.594628] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:43.089 [2024-10-08 21:13:11.594678] bdev_nvme.c:6542:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:43.089 request: 00:45:43.089 { 00:45:43.089 "name": "nvme0", 00:45:43.089 "trtype": "tcp", 00:45:43.089 "traddr": "127.0.0.1", 00:45:43.089 "adrfam": "ipv4", 00:45:43.089 "trsvcid": "4420", 00:45:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:43.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:43.089 "prchk_reftag": false, 00:45:43.089 "prchk_guard": false, 00:45:43.089 "hdgst": false, 00:45:43.089 "ddgst": false, 00:45:43.089 "psk": "key0", 00:45:43.089 "allow_unrecognized_csi": false, 00:45:43.089 "method": "bdev_nvme_attach_controller", 00:45:43.089 "req_id": 1 00:45:43.089 } 00:45:43.089 Got JSON-RPC error response 00:45:43.089 response: 00:45:43.089 { 00:45:43.089 "code": -19, 00:45:43.089 "message": "No such device" 00:45:43.089 } 00:45:43.089 21:13:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:43.089 21:13:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:43.089 21:13:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:43.089 21:13:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:43.089 21:13:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:43.089 21:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:43.349 21:13:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IWeB6UtjfB 00:45:43.349 21:13:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:45:43.349 21:13:11 keyring_file -- nvmf/common.sh@731 -- # python - 00:45:43.349 21:13:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IWeB6UtjfB 00:45:43.349 21:13:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IWeB6UtjfB 00:45:43.349 21:13:12 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.IWeB6UtjfB 00:45:43.349 21:13:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWeB6UtjfB 00:45:43.349 21:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWeB6UtjfB 00:45:43.917 21:13:12 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:43.917 21:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:44.178 nvme0n1 00:45:44.178 21:13:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:44.178 21:13:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:44.178 21:13:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:44.178 21:13:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:44.178 21:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:44.178 21:13:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:45.117 21:13:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:45.117 21:13:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:45.117 21:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:45.376 21:13:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:45.376 21:13:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:45.376 21:13:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:45.376 21:13:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:45:45.376 21:13:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:45.635 21:13:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:45.635 21:13:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:45.635 21:13:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:45.635 21:13:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:45.635 21:13:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:45.635 21:13:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:45.635 21:13:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:46.204 21:13:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:46.204 21:13:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:46.204 21:13:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:46.775 21:13:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:46.775 21:13:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:46.775 21:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:47.054 21:13:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:47.054 21:13:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWeB6UtjfB 00:45:47.054 21:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWeB6UtjfB 00:45:47.327 21:13:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VPvBFVyIQZ 00:45:47.327 21:13:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VPvBFVyIQZ 00:45:47.902 21:13:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:47.902 21:13:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:48.469 nvme0n1 00:45:48.469 21:13:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:48.469 21:13:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:49.036 21:13:17 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:49.036 "subsystems": [ 00:45:49.036 { 00:45:49.036 "subsystem": "keyring", 00:45:49.036 "config": [ 00:45:49.036 { 00:45:49.036 "method": "keyring_file_add_key", 00:45:49.036 "params": { 00:45:49.036 "name": "key0", 00:45:49.036 "path": "/tmp/tmp.IWeB6UtjfB" 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "keyring_file_add_key", 00:45:49.036 "params": { 00:45:49.036 "name": "key1", 00:45:49.036 "path": "/tmp/tmp.VPvBFVyIQZ" 00:45:49.036 } 00:45:49.036 } 00:45:49.036 ] 
00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "iobuf", 00:45:49.036 "config": [ 00:45:49.036 { 00:45:49.036 "method": "iobuf_set_options", 00:45:49.036 "params": { 00:45:49.036 "small_pool_count": 8192, 00:45:49.036 "large_pool_count": 1024, 00:45:49.036 "small_bufsize": 8192, 00:45:49.036 "large_bufsize": 135168 00:45:49.036 } 00:45:49.036 } 00:45:49.036 ] 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "sock", 00:45:49.036 "config": [ 00:45:49.036 { 00:45:49.036 "method": "sock_set_default_impl", 00:45:49.036 "params": { 00:45:49.036 "impl_name": "posix" 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "sock_impl_set_options", 00:45:49.036 "params": { 00:45:49.036 "impl_name": "ssl", 00:45:49.036 "recv_buf_size": 4096, 00:45:49.036 "send_buf_size": 4096, 00:45:49.036 "enable_recv_pipe": true, 00:45:49.036 "enable_quickack": false, 00:45:49.036 "enable_placement_id": 0, 00:45:49.036 "enable_zerocopy_send_server": true, 00:45:49.036 "enable_zerocopy_send_client": false, 00:45:49.036 "zerocopy_threshold": 0, 00:45:49.036 "tls_version": 0, 00:45:49.036 "enable_ktls": false 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "sock_impl_set_options", 00:45:49.036 "params": { 00:45:49.036 "impl_name": "posix", 00:45:49.036 "recv_buf_size": 2097152, 00:45:49.036 "send_buf_size": 2097152, 00:45:49.036 "enable_recv_pipe": true, 00:45:49.036 "enable_quickack": false, 00:45:49.036 "enable_placement_id": 0, 00:45:49.036 "enable_zerocopy_send_server": true, 00:45:49.036 "enable_zerocopy_send_client": false, 00:45:49.036 "zerocopy_threshold": 0, 00:45:49.036 "tls_version": 0, 00:45:49.036 "enable_ktls": false 00:45:49.036 } 00:45:49.036 } 00:45:49.036 ] 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "vmd", 00:45:49.036 "config": [] 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "accel", 00:45:49.036 "config": [ 00:45:49.036 { 00:45:49.036 "method": "accel_set_options", 00:45:49.036 "params": { 00:45:49.036 "small_cache_size": 128, 00:45:49.036 "large_cache_size": 16, 00:45:49.036 "task_count": 2048, 00:45:49.036 "sequence_count": 2048, 00:45:49.036 "buf_count": 2048 00:45:49.036 } 00:45:49.036 } 00:45:49.036 ] 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "bdev", 00:45:49.036 "config": [ 00:45:49.036 { 00:45:49.036 "method": "bdev_set_options", 00:45:49.036 "params": { 00:45:49.036 "bdev_io_pool_size": 65535, 00:45:49.036 "bdev_io_cache_size": 256, 00:45:49.036 "bdev_auto_examine": true, 00:45:49.036 "iobuf_small_cache_size": 128, 00:45:49.036 "iobuf_large_cache_size": 16 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_raid_set_options", 00:45:49.036 "params": { 00:45:49.036 "process_window_size_kb": 1024, 00:45:49.036 "process_max_bandwidth_mb_sec": 0 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_iscsi_set_options", 00:45:49.036 "params": { 00:45:49.036 "timeout_sec": 30 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_nvme_set_options", 00:45:49.036 "params": { 00:45:49.036 "action_on_timeout": "none", 00:45:49.036 "timeout_us": 0, 00:45:49.036 "timeout_admin_us": 0, 00:45:49.036 "keep_alive_timeout_ms": 10000, 00:45:49.036 "arbitration_burst": 0, 00:45:49.036 "low_priority_weight": 0, 00:45:49.036 "medium_priority_weight": 0, 00:45:49.036 "high_priority_weight": 0, 00:45:49.036 "nvme_adminq_poll_period_us": 10000, 00:45:49.036 "nvme_ioq_poll_period_us": 0, 00:45:49.036 "io_queue_requests": 512, 00:45:49.036 "delay_cmd_submit": true, 
00:45:49.036 "transport_retry_count": 4, 00:45:49.036 "bdev_retry_count": 3, 00:45:49.036 "transport_ack_timeout": 0, 00:45:49.036 "ctrlr_loss_timeout_sec": 0, 00:45:49.036 "reconnect_delay_sec": 0, 00:45:49.036 "fast_io_fail_timeout_sec": 0, 00:45:49.036 "disable_auto_failback": false, 00:45:49.036 "generate_uuids": false, 00:45:49.036 "transport_tos": 0, 00:45:49.036 "nvme_error_stat": false, 00:45:49.036 "rdma_srq_size": 0, 00:45:49.036 "io_path_stat": false, 00:45:49.036 "allow_accel_sequence": false, 00:45:49.036 "rdma_max_cq_size": 0, 00:45:49.036 "rdma_cm_event_timeout_ms": 0, 00:45:49.036 "dhchap_digests": [ 00:45:49.036 "sha256", 00:45:49.036 "sha384", 00:45:49.036 "sha512" 00:45:49.036 ], 00:45:49.036 "dhchap_dhgroups": [ 00:45:49.036 "null", 00:45:49.036 "ffdhe2048", 00:45:49.036 "ffdhe3072", 00:45:49.036 "ffdhe4096", 00:45:49.036 "ffdhe6144", 00:45:49.036 "ffdhe8192" 00:45:49.036 ] 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_nvme_attach_controller", 00:45:49.036 "params": { 00:45:49.036 "name": "nvme0", 00:45:49.036 "trtype": "TCP", 00:45:49.036 "adrfam": "IPv4", 00:45:49.036 "traddr": "127.0.0.1", 00:45:49.036 "trsvcid": "4420", 00:45:49.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:49.036 "prchk_reftag": false, 00:45:49.036 "prchk_guard": false, 00:45:49.036 "ctrlr_loss_timeout_sec": 0, 00:45:49.036 "reconnect_delay_sec": 0, 00:45:49.036 "fast_io_fail_timeout_sec": 0, 00:45:49.036 "psk": "key0", 00:45:49.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:49.036 "hdgst": false, 00:45:49.036 "ddgst": false, 00:45:49.036 "multipath": "multipath" 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_nvme_set_hotplug", 00:45:49.036 "params": { 00:45:49.036 "period_us": 100000, 00:45:49.036 "enable": false 00:45:49.036 } 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "method": "bdev_wait_for_examine" 00:45:49.036 } 00:45:49.036 ] 00:45:49.036 }, 00:45:49.036 { 00:45:49.036 "subsystem": "nbd", 00:45:49.036 "config": [] 00:45:49.036 } 00:45:49.036 ] 00:45:49.036 }' 00:45:49.036 21:13:17 keyring_file -- keyring/file.sh@115 -- # killprocess 1950998 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1950998 ']' 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1950998 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1950998 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1950998' 00:45:49.036 killing process with pid 1950998 00:45:49.036 21:13:17 keyring_file -- common/autotest_common.sh@969 -- # kill 1950998 00:45:49.036 Received shutdown signal, test time was about 1.000000 seconds 00:45:49.036 00:45:49.036 Latency(us) 00:45:49.036 [2024-10-08T19:13:17.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:49.036 [2024-10-08T19:13:17.799Z] =================================================================================================================== 00:45:49.036 [2024-10-08T19:13:17.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:49.036 21:13:17 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1950998 00:45:49.295 21:13:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=1953265 00:45:49.295 21:13:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1953265 /var/tmp/bperf.sock 00:45:49.295 21:13:18 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1953265 ']' 00:45:49.295 21:13:18 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:49.295 21:13:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:49.295 21:13:18 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:49.295 21:13:18 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:49.295 21:13:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:49.295 "subsystems": [ 00:45:49.295 { 00:45:49.295 "subsystem": "keyring", 00:45:49.295 "config": [ 00:45:49.295 { 00:45:49.295 "method": "keyring_file_add_key", 00:45:49.295 "params": { 00:45:49.295 "name": "key0", 00:45:49.295 "path": "/tmp/tmp.IWeB6UtjfB" 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "keyring_file_add_key", 00:45:49.295 "params": { 00:45:49.295 "name": "key1", 00:45:49.295 "path": "/tmp/tmp.VPvBFVyIQZ" 00:45:49.295 } 00:45:49.295 } 00:45:49.295 ] 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "subsystem": "iobuf", 00:45:49.295 "config": [ 00:45:49.295 { 00:45:49.295 "method": "iobuf_set_options", 00:45:49.295 "params": { 00:45:49.295 "small_pool_count": 8192, 00:45:49.295 "large_pool_count": 1024, 00:45:49.295 "small_bufsize": 8192, 00:45:49.295 "large_bufsize": 135168 00:45:49.295 } 00:45:49.295 } 00:45:49.295 ] 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "subsystem": "sock", 00:45:49.295 "config": [ 00:45:49.295 { 00:45:49.295 "method": "sock_set_default_impl", 00:45:49.295 "params": { 00:45:49.295 "impl_name": "posix" 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "sock_impl_set_options", 00:45:49.295 "params": { 00:45:49.295 "impl_name": "ssl", 00:45:49.295 "recv_buf_size": 4096, 00:45:49.295 "send_buf_size": 4096, 00:45:49.295 "enable_recv_pipe": true, 00:45:49.295 "enable_quickack": false, 00:45:49.295 "enable_placement_id": 0, 00:45:49.295 "enable_zerocopy_send_server": true, 00:45:49.295 "enable_zerocopy_send_client": false, 00:45:49.295 "zerocopy_threshold": 0, 00:45:49.295 "tls_version": 0, 00:45:49.295 "enable_ktls": false 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "sock_impl_set_options", 00:45:49.295 "params": { 00:45:49.295 "impl_name": "posix", 00:45:49.295 "recv_buf_size": 2097152, 00:45:49.295 "send_buf_size": 2097152, 00:45:49.295 "enable_recv_pipe": true, 00:45:49.295 "enable_quickack": false, 00:45:49.295 "enable_placement_id": 0, 00:45:49.295 "enable_zerocopy_send_server": true, 00:45:49.295 "enable_zerocopy_send_client": false, 00:45:49.295 "zerocopy_threshold": 0, 00:45:49.295 "tls_version": 0, 00:45:49.295 "enable_ktls": false 00:45:49.295 } 00:45:49.295 } 00:45:49.295 ] 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "subsystem": "vmd", 00:45:49.295 "config": [] 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "subsystem": "accel", 00:45:49.295 "config": [ 00:45:49.295 { 00:45:49.295 "method": "accel_set_options", 
00:45:49.295 "params": { 00:45:49.295 "small_cache_size": 128, 00:45:49.295 "large_cache_size": 16, 00:45:49.295 "task_count": 2048, 00:45:49.295 "sequence_count": 2048, 00:45:49.295 "buf_count": 2048 00:45:49.295 } 00:45:49.295 } 00:45:49.295 ] 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "subsystem": "bdev", 00:45:49.295 "config": [ 00:45:49.295 { 00:45:49.295 "method": "bdev_set_options", 00:45:49.295 "params": { 00:45:49.295 "bdev_io_pool_size": 65535, 00:45:49.295 "bdev_io_cache_size": 256, 00:45:49.295 "bdev_auto_examine": true, 00:45:49.295 "iobuf_small_cache_size": 128, 00:45:49.295 "iobuf_large_cache_size": 16 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "bdev_raid_set_options", 00:45:49.295 "params": { 00:45:49.295 "process_window_size_kb": 1024, 00:45:49.295 "process_max_bandwidth_mb_sec": 0 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "bdev_iscsi_set_options", 00:45:49.295 "params": { 00:45:49.295 "timeout_sec": 30 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "bdev_nvme_set_options", 00:45:49.295 "params": { 00:45:49.295 "action_on_timeout": "none", 00:45:49.295 "timeout_us": 0, 00:45:49.295 "timeout_admin_us": 0, 00:45:49.295 "keep_alive_timeout_ms": 10000, 00:45:49.295 "arbitration_burst": 0, 00:45:49.295 "low_priority_weight": 0, 00:45:49.295 "medium_priority_weight": 0, 00:45:49.295 "high_priority_weight": 0, 00:45:49.295 "nvme_adminq_poll_period_us": 10000, 00:45:49.295 "nvme_ioq_poll_period_us": 0, 00:45:49.295 "io_queue_requests": 512, 00:45:49.295 "delay_cmd_submit": true, 00:45:49.295 "transport_retry_count": 4, 00:45:49.295 "bdev_retry_count": 3, 00:45:49.295 "transport_ack_timeout": 0, 00:45:49.295 "ctrlr_loss_timeout_sec": 0, 00:45:49.295 "reconnect_delay_sec": 0, 00:45:49.295 "fast_io_fail_timeout_sec": 0, 00:45:49.295 "disable_auto_failback": false, 00:45:49.295 "generate_uuids": false, 00:45:49.295 "transport_tos": 0, 00:45:49.295 "nvme_error_stat": false, 00:45:49.295 "rdma_srq_size": 0, 00:45:49.295 "io_path_stat": false, 00:45:49.295 "allow_accel_sequence": false, 00:45:49.295 "rdma_max_cq_size": 0, 00:45:49.295 "rdma_cm_event_timeout_ms": 0, 00:45:49.295 "dhchap_digests": [ 00:45:49.295 "sha256", 00:45:49.295 "sha384", 00:45:49.295 "sha512" 00:45:49.295 ], 00:45:49.295 "dhchap_dhgroups": [ 00:45:49.295 "null", 00:45:49.295 "ffdhe2048", 00:45:49.295 "ffdhe3072", 00:45:49.295 "ffdhe4096", 00:45:49.295 "ffdhe6144", 00:45:49.295 "ffdhe8192" 00:45:49.295 ] 00:45:49.295 } 00:45:49.295 }, 00:45:49.295 { 00:45:49.295 "method": "bdev_nvme_attach_controller", 00:45:49.295 "params": { 00:45:49.295 "name": "nvme0", 00:45:49.295 "trtype": "TCP", 00:45:49.296 "adrfam": "IPv4", 00:45:49.296 "traddr": "127.0.0.1", 00:45:49.296 "trsvcid": "4420", 00:45:49.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:49.296 "prchk_reftag": false, 00:45:49.296 "prchk_guard": false, 00:45:49.296 "ctrlr_loss_timeout_sec": 0, 00:45:49.296 "reconnect_delay_sec": 0, 00:45:49.296 "fast_io_fail_timeout_sec": 0, 00:45:49.296 "psk": "key0", 00:45:49.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:49.296 "hdgst": false, 00:45:49.296 "ddgst": false, 00:45:49.296 "multipath": "multipath" 00:45:49.296 } 00:45:49.296 }, 00:45:49.296 { 00:45:49.296 "method": "bdev_nvme_set_hotplug", 00:45:49.296 "params": { 00:45:49.296 "period_us": 100000, 00:45:49.296 "enable": false 00:45:49.296 } 00:45:49.296 }, 00:45:49.296 { 00:45:49.296 "method": "bdev_wait_for_examine" 00:45:49.296 } 00:45:49.296 ] 00:45:49.296 }, 00:45:49.296 { 00:45:49.296 
"subsystem": "nbd", 00:45:49.296 "config": [] 00:45:49.296 } 00:45:49.296 ] 00:45:49.296 }' 00:45:49.296 21:13:18 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:49.296 21:13:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:49.554 [2024-10-08 21:13:18.068408] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:45:49.554 [2024-10-08 21:13:18.068512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953265 ] 00:45:49.554 [2024-10-08 21:13:18.176493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:49.812 [2024-10-08 21:13:18.393735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:50.070 [2024-10-08 21:13:18.647184] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:50.637 21:13:19 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:50.637 21:13:19 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:50.637 21:13:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:50.637 21:13:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:50.637 21:13:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:50.895 21:13:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:50.895 21:13:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:50.895 21:13:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:50.895 21:13:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:50.895 21:13:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:50.895 21:13:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:50.895 21:13:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:51.464 21:13:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:51.464 21:13:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:51.464 21:13:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:51.464 21:13:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:51.464 21:13:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:51.464 21:13:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:51.464 21:13:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:52.034 21:13:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:52.034 21:13:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:52.034 21:13:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:52.034 21:13:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:52.604 21:13:21 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:52.604 21:13:21 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:52.604 21:13:21 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IWeB6UtjfB 
/tmp/tmp.VPvBFVyIQZ 00:45:52.604 21:13:21 keyring_file -- keyring/file.sh@20 -- # killprocess 1953265 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1953265 ']' 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1953265 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953265 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953265' 00:45:52.604 killing process with pid 1953265 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@969 -- # kill 1953265 00:45:52.604 Received shutdown signal, test time was about 1.000000 seconds 00:45:52.604 00:45:52.604 Latency(us) 00:45:52.604 [2024-10-08T19:13:21.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:52.604 [2024-10-08T19:13:21.367Z] =================================================================================================================== 00:45:52.604 [2024-10-08T19:13:21.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:52.604 21:13:21 keyring_file -- common/autotest_common.sh@974 -- # wait 1953265 00:45:52.863 21:13:21 keyring_file -- keyring/file.sh@21 -- # killprocess 1950871 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1950871 ']' 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1950871 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1950871 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1950871' 00:45:52.863 killing process with pid 1950871 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@969 -- # kill 1950871 00:45:52.863 21:13:21 keyring_file -- common/autotest_common.sh@974 -- # wait 1950871 00:45:53.803 00:45:53.803 real 0m25.190s 00:45:53.803 user 1m5.066s 00:45:53.803 sys 0m5.027s 00:45:53.803 21:13:22 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:53.803 21:13:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:53.803 ************************************ 00:45:53.803 END TEST keyring_file 00:45:53.803 ************************************ 00:45:53.803 21:13:22 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:45:53.803 21:13:22 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:53.803 21:13:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:53.803 21:13:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:53.803 21:13:22 -- common/autotest_common.sh@10 -- # set +x 
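Every rpc_cmd/bperf_cmd in the keyring_file run above is a JSON-RPC exchange with scripts/rpc.py over a Unix-domain socket (/var/tmp/spdk.sock for the target, /var/tmp/bperf.sock for bdevperf); the request/response dumps in the trace, such as the -5 Input/output error and -19 No such device blocks, are those messages echoed into the log. A minimal stdlib sketch of the same exchange, assuming a simple read-until-one-JSON-object-parses loop (a simplification of what rpc.py actually does):

    # Sketch of the JSON-RPC round trip behind the rpc.py calls in this log.
    # Socket path, method name and result fields (name, refcnt) are taken from
    # the trace; the read loop below is an assumption kept short for clarity.
    import json
    import socket

    def rpc_call(sock_path, method, params=None, req_id=1):
        req = {"jsonrpc": "2.0", "method": method, "id": req_id}
        if params is not None:
            req["params"] = params
        buf = b""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    return None
                buf += chunk
                try:
                    return json.loads(buf)      # full reply assembled
                except json.JSONDecodeError:
                    continue                    # partial read, keep going

    # Equivalent of: rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name=="key0") | .refcnt'
    reply = rpc_call("/var/tmp/bperf.sock", "keyring_get_keys")
    if reply and "result" in reply:
        key0 = next((k for k in reply["result"] if k.get("name") == "key0"), None)
        print(key0["refcnt"] if key0 else "key0 not loaded")

Error cases come back with an "error" member instead of "result", which is what the NOT wrappers in the trace assert on: rpc.py exits non-zero, the shell records es=1, and the test treats that as the expected negative result.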
00:45:53.803 ************************************ 00:45:53.803 START TEST keyring_linux 00:45:53.803 ************************************ 00:45:53.803 21:13:22 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:53.803 Joined session keyring: 585401506 00:45:53.803 * Looking for test storage... 00:45:53.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:53.803 21:13:22 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:53.803 21:13:22 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:45:53.803 21:13:22 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:54.065 21:13:22 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:54.065 21:13:22 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:54.065 21:13:22 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:54.066 21:13:22 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.066 --rc genhtml_branch_coverage=1 00:45:54.066 --rc genhtml_function_coverage=1 00:45:54.066 --rc genhtml_legend=1 00:45:54.066 --rc geninfo_all_blocks=1 00:45:54.066 --rc geninfo_unexecuted_blocks=1 00:45:54.066 00:45:54.066 ' 00:45:54.066 21:13:22 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.066 --rc genhtml_branch_coverage=1 00:45:54.066 --rc genhtml_function_coverage=1 00:45:54.066 --rc genhtml_legend=1 00:45:54.066 --rc geninfo_all_blocks=1 00:45:54.066 --rc geninfo_unexecuted_blocks=1 00:45:54.066 00:45:54.066 ' 00:45:54.066 21:13:22 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.066 --rc genhtml_branch_coverage=1 00:45:54.066 --rc genhtml_function_coverage=1 00:45:54.066 --rc genhtml_legend=1 00:45:54.066 --rc geninfo_all_blocks=1 00:45:54.066 --rc geninfo_unexecuted_blocks=1 00:45:54.066 00:45:54.066 ' 00:45:54.066 21:13:22 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.066 --rc genhtml_branch_coverage=1 00:45:54.066 --rc genhtml_function_coverage=1 00:45:54.066 --rc genhtml_legend=1 00:45:54.066 --rc geninfo_all_blocks=1 00:45:54.066 --rc geninfo_unexecuted_blocks=1 00:45:54.066 00:45:54.066 ' 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:54.066 21:13:22 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:54.066 21:13:22 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:54.066 21:13:22 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:54.066 21:13:22 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:54.066 21:13:22 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.066 21:13:22 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.066 21:13:22 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.066 21:13:22 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:54.066 21:13:22 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
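The host identity used for the rest of this run comes from the nvmf/common.sh trace above: nvme gen-hostnqn emits a UUID-based NQN and the host ID reuses that UUID. A minimal sketch of that derivation, inferred from the logged values rather than taken from the helper's source:

    hostnqn=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    hostid=${hostnqn##*:uuid:}       # keep only the UUID portion
    echo "--hostnqn=$hostnqn --hostid=$hostid"   # the pair stored in NVME_HOST for later 'nvme connect' calls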
00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:54.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@731 -- # python - 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:54.066 /tmp/:spdk-test:key0 00:45:54.066 21:13:22 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:54.066 
21:13:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:45:54.066 21:13:22 keyring_linux -- nvmf/common.sh@731 -- # python - 00:45:54.066 21:13:22 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:54.327 21:13:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:54.327 /tmp/:spdk-test:key1 00:45:54.327 21:13:22 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1953892 00:45:54.327 21:13:22 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:54.327 21:13:22 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1953892 00:45:54.327 21:13:22 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1953892 ']' 00:45:54.328 21:13:22 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:54.328 21:13:22 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:54.328 21:13:22 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:54.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:54.328 21:13:22 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:54.328 21:13:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:54.328 [2024-10-08 21:13:22.954055] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
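prep_key turns each raw hex key into the TLS PSK interchange form "NVMeTLSkey-1:00:<base64>:" via the python helper traced above, writes it to /tmp/:spdk-test:keyN, and locks the file down to mode 0600. As a rough sanity check, and assuming the base64 payload is the ASCII hex key followed by a 4-byte checksum, the configured key can be read back out of the interchange string that appears in the keyctl calls below:

    b64=MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ    # base64 payload of the key0 interchange string
    echo "$b64" | base64 -d | head -c 32; echo              # prints 00112233445566778899aabbccddeeff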
00:45:54.328 [2024-10-08 21:13:22.954250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953892 ] 00:45:54.328 [2024-10-08 21:13:23.090085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:54.589 [2024-10-08 21:13:23.305270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:55.159 [2024-10-08 21:13:23.786730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:55.159 null0 00:45:55.159 [2024-10-08 21:13:23.819770] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:55.159 [2024-10-08 21:13:23.820723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:55.159 503809076 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:55.159 522616638 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1954025 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:55.159 21:13:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1954025 /var/tmp/bperf.sock 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1954025 ']' 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:55.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:55.159 21:13:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:55.418 [2024-10-08 21:13:23.944643] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
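From this point each PSK exists twice: as the 0600 file under /tmp and as a user key on the session keyring (@s), identified by the serial number printed above. The keyring half of that lifecycle, condensed from the keyctl calls traced in this run (values copied from the log):

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # printed 503809076 in this run
    keyctl search @s user :spdk-test:key0             # resolves the same serial number
    keyctl print "$sn"                                # echoes the PSK back for comparison
    keyctl unlink "$sn"                               # cleanup does this for key0 and key1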
00:45:55.418 [2024-10-08 21:13:23.944813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954025 ] 00:45:55.418 [2024-10-08 21:13:24.092027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:55.678 [2024-10-08 21:13:24.307953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.618 21:13:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:56.618 21:13:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:56.618 21:13:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:56.618 21:13:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:57.186 21:13:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:57.186 21:13:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:57.757 21:13:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:57.757 21:13:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:58.324 [2024-10-08 21:13:26.830613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:58.324 nvme0n1 00:45:58.324 21:13:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:58.324 21:13:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:58.324 21:13:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:58.324 21:13:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:58.324 21:13:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:58.324 21:13:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:58.895 21:13:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:58.895 21:13:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:58.895 21:13:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:58.895 21:13:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:58.895 21:13:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:58.895 21:13:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:58.895 21:13:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@25 -- # sn=503809076 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:59.837 21:13:28 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 503809076 == \5\0\3\8\0\9\0\7\6 ]] 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 503809076 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:59.837 21:13:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:59.837 Running I/O for 1 seconds... 00:46:00.781 4253.00 IOPS, 16.61 MiB/s 00:46:00.781 Latency(us) 00:46:00.781 [2024-10-08T19:13:29.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:00.781 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:00.781 nvme0n1 : 1.03 4260.17 16.64 0.00 0.00 29610.28 10097.40 38836.15 00:46:00.781 [2024-10-08T19:13:29.544Z] =================================================================================================================== 00:46:00.781 [2024-10-08T19:13:29.544Z] Total : 4260.17 16.64 0.00 0.00 29610.28 10097.40 38836.15 00:46:00.781 { 00:46:00.781 "results": [ 00:46:00.781 { 00:46:00.781 "job": "nvme0n1", 00:46:00.781 "core_mask": "0x2", 00:46:00.781 "workload": "randread", 00:46:00.781 "status": "finished", 00:46:00.781 "queue_depth": 128, 00:46:00.781 "io_size": 4096, 00:46:00.781 "runtime": 1.028598, 00:46:00.781 "iops": 4260.167723444923, 00:46:00.781 "mibps": 16.64128016970673, 00:46:00.781 "io_failed": 0, 00:46:00.781 "io_timeout": 0, 00:46:00.781 "avg_latency_us": 29610.275290498164, 00:46:00.781 "min_latency_us": 10097.39851851852, 00:46:00.781 "max_latency_us": 38836.148148148146 00:46:00.781 } 00:46:00.781 ], 00:46:00.781 "core_count": 1 00:46:00.781 } 00:46:00.781 21:13:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:00.781 21:13:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:01.041 21:13:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:01.041 21:13:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:01.041 21:13:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:01.041 21:13:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:01.041 21:13:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:01.041 21:13:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:01.984 21:13:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:01.984 21:13:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:01.984 21:13:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:01.984 21:13:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:01.985 21:13:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:01.985 [2024-10-08 21:13:30.703027] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:01.985 [2024-10-08 21:13:30.703793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195e7a0 (107): Transport endpoint is not connected 00:46:01.985 [2024-10-08 21:13:30.704774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195e7a0 (9): Bad file descriptor 00:46:01.985 [2024-10-08 21:13:30.705768] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:01.985 [2024-10-08 21:13:30.705825] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:01.985 [2024-10-08 21:13:30.705861] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:01.985 [2024-10-08 21:13:30.705899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:46:01.985 request: 00:46:01.985 { 00:46:01.985 "name": "nvme0", 00:46:01.985 "trtype": "tcp", 00:46:01.985 "traddr": "127.0.0.1", 00:46:01.985 "adrfam": "ipv4", 00:46:01.985 "trsvcid": "4420", 00:46:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:01.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:01.985 "prchk_reftag": false, 00:46:01.985 "prchk_guard": false, 00:46:01.985 "hdgst": false, 00:46:01.985 "ddgst": false, 00:46:01.985 "psk": ":spdk-test:key1", 00:46:01.985 "allow_unrecognized_csi": false, 00:46:01.985 "method": "bdev_nvme_attach_controller", 00:46:01.985 "req_id": 1 00:46:01.985 } 00:46:01.985 Got JSON-RPC error response 00:46:01.985 response: 00:46:01.985 { 00:46:01.985 "code": -5, 00:46:01.985 "message": "Input/output error" 00:46:01.985 } 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:01.985 21:13:30 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@33 -- # sn=503809076 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 503809076 00:46:01.985 1 links removed 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:01.985 21:13:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:02.245 21:13:30 keyring_linux -- keyring/linux.sh@33 -- # sn=522616638 00:46:02.245 21:13:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 522616638 00:46:02.245 1 links removed 00:46:02.245 21:13:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1954025 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1954025 ']' 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1954025 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1954025 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1954025' 00:46:02.245 killing process with pid 1954025 00:46:02.245 21:13:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 1954025 00:46:02.245 Received shutdown signal, test time was about 1.000000 seconds 00:46:02.245 00:46:02.245 
Latency(us) 00:46:02.245 [2024-10-08T19:13:31.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:02.245 [2024-10-08T19:13:31.008Z] =================================================================================================================== 00:46:02.245 [2024-10-08T19:13:31.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:02.246 21:13:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 1954025 00:46:02.507 21:13:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1953892 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1953892 ']' 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1953892 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953892 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953892' 00:46:02.507 killing process with pid 1953892 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1953892 00:46:02.507 21:13:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1953892 00:46:03.078 00:46:03.078 real 0m9.397s 00:46:03.078 user 0m19.815s 00:46:03.078 sys 0m2.530s 00:46:03.078 21:13:31 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:03.078 21:13:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:03.078 ************************************ 00:46:03.078 END TEST keyring_linux 00:46:03.078 ************************************ 00:46:03.078 21:13:31 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:03.078 21:13:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:46:03.078 21:13:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:03.078 21:13:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:03.078 21:13:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:46:03.078 21:13:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:46:03.078 21:13:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:46:03.078 21:13:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:03.078 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:46:03.078 21:13:31 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:46:03.078 21:13:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:46:03.078 21:13:31 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:46:03.078 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:46:06.377 INFO: APP EXITING 
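Everything the test just verified went through the bdevperf RPC socket rather than the target's default /var/tmp/spdk.sock. Roughly the same checks by hand, using only commands that appear in the trace above; the bperf_rpc wrapper name here is illustrative, the test's own helper is bperf_cmd:

    bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0   # succeeds with key0
    bperf_rpc keyring_get_keys | jq length                                                  # 1 while nvme0 holds key0
    bperf_rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'     # serial as SPDK sees it
    bperf_rpc bdev_nvme_detach_controller nvme0                                             # back to 0 keys in use
    # the same attach with --psk :spdk-test:key1 is the negative case above and is expected
    # to fail with the Input/output error shown in the JSON-RPC response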
00:46:06.377 INFO: killing all VMs 00:46:06.377 INFO: killing vhost app 00:46:06.377 INFO: EXIT DONE 00:46:08.286 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:46:08.287 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:46:08.287 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:46:08.287 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:46:08.287 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:46:08.287 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:46:08.287 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:46:08.287 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:46:08.287 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:46:08.287 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:46:08.287 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:46:08.287 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:46:08.287 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:46:08.287 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:46:08.287 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:46:08.287 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:46:08.287 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:46:10.196 Cleaning 00:46:10.196 Removing: /var/run/dpdk/spdk0/config 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:46:10.196 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:10.196 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:10.197 Removing: /var/run/dpdk/spdk1/config 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:46:10.197 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:46:10.197 Removing: /var/run/dpdk/spdk1/hugepage_info 00:46:10.197 Removing: /var/run/dpdk/spdk2/config 00:46:10.197 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:46:10.457 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:46:10.457 Removing: /var/run/dpdk/spdk2/hugepage_info 00:46:10.457 Removing: /var/run/dpdk/spdk3/config 00:46:10.457 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:46:10.457 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:46:10.457 Removing: /var/run/dpdk/spdk3/hugepage_info 00:46:10.457 Removing: /var/run/dpdk/spdk4/config 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:46:10.457 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:46:10.457 Removing: /var/run/dpdk/spdk4/hugepage_info 00:46:10.457 Removing: /dev/shm/bdev_svc_trace.1 00:46:10.457 Removing: /dev/shm/nvmf_trace.0 00:46:10.457 Removing: /dev/shm/spdk_tgt_trace.pid1582371 00:46:10.457 Removing: /var/run/dpdk/spdk0 00:46:10.457 Removing: /var/run/dpdk/spdk1 00:46:10.457 Removing: /var/run/dpdk/spdk2 00:46:10.457 Removing: /var/run/dpdk/spdk3 00:46:10.457 Removing: /var/run/dpdk/spdk4 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1580547 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1581421 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1582371 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1583040 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1583717 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1583926 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1584751 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1584886 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1585270 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1587319 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1588423 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1588872 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1589204 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1589545 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1589871 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1590036 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1590192 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1590507 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1591083 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1594502 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1594930 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1595222 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1595364 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1596056 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1596167 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1596755 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1596895 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1597192 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1597330 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1597570 00:46:10.457 Removing: /var/run/dpdk/spdk_pid1597630 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1598261 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1598420 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1598638 00:46:10.718 Removing: 
/var/run/dpdk/spdk_pid1601279 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1604190 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1611783 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1612350 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1615428 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1615697 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1618621 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1623004 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1625964 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1633488 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1639150 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1640471 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1641140 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1653090 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1655509 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1685473 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1688802 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1693681 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1698198 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1698202 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1698733 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1699385 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1699924 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1700321 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1700442 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1700575 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1700721 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1700749 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1701375 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1701932 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1702560 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1702957 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1702959 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1703220 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1704497 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1705366 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1710831 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1756316 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1759898 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1760954 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1762403 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1762801 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1763079 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1763349 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1764060 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1765494 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1766880 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1767566 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1769453 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1770037 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1770701 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1773489 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1777045 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1777046 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1777047 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1779407 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1785053 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1787815 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1791728 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1792680 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1793903 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1794984 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1797985 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1800462 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1805100 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1805108 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1808180 00:46:10.718 Removing: 
/var/run/dpdk/spdk_pid1808392 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1808566 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1808829 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1808838 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1811875 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1812285 00:46:10.718 Removing: /var/run/dpdk/spdk_pid1815151 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1817234 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1821648 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1825688 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1833761 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1838258 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1838260 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1854892 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1855521 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1856094 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1856622 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1857462 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1858014 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1858555 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1859214 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1861995 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1862134 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1866074 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1866248 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1869713 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1872654 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1880582 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1881094 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1883750 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1883919 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1886935 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1891067 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1893960 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1901261 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1906748 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1907932 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1908643 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1920470 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1922864 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1924868 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1930192 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1930202 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1933373 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1934771 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1936163 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1936908 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1938314 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1939197 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1945365 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1945648 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1946042 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1947708 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1948007 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1948400 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1950871 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1950998 00:46:10.977 Removing: /var/run/dpdk/spdk_pid1953265 00:46:10.978 Removing: /var/run/dpdk/spdk_pid1953892 00:46:10.978 Removing: /var/run/dpdk/spdk_pid1954025 00:46:10.978 Clean 00:46:11.238 21:13:39 -- common/autotest_common.sh@1451 -- # return 0 00:46:11.238 21:13:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:46:11.238 21:13:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:11.238 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:46:11.238 21:13:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:46:11.238 
21:13:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:11.238 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:46:11.238 21:13:39 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:11.238 21:13:39 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:46:11.238 21:13:39 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:46:11.238 21:13:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:46:11.238 21:13:39 -- spdk/autotest.sh@394 -- # hostname 00:46:11.238 21:13:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:46:11.498 geninfo: WARNING: invalid characters removed from testname! 00:47:07.728 21:14:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:13.063 21:14:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:17.249 21:14:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:21.433 21:14:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:25.619 21:14:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:30.885 21:14:58 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
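Coverage post-processing starts here: the lcov capture above records the counters produced by the test run, and the passes traced next merge them with the pre-test baseline and strip sources that should not count. A condensed sketch, with the long --rc lcov_*_coverage options and absolute output paths elided and $spdk_dir standing in for the repository path:

    lcov -q -c --no-external -d "$spdk_dir" -t "$(hostname)" -o cov_test.info              # capture run-time counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                            # merge with the baseline capture
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                                 # drop vendored DPDK sources
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info     # and system headers, among other filters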
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:35.082 21:15:03 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:35.082 21:15:03 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:47:35.082 21:15:03 -- common/autotest_common.sh@1681 -- $ lcov --version 00:47:35.082 21:15:03 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:47:35.082 21:15:03 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:47:35.082 21:15:03 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:47:35.082 21:15:03 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:47:35.082 21:15:03 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:47:35.082 21:15:03 -- scripts/common.sh@336 -- $ IFS=.-: 00:47:35.082 21:15:03 -- scripts/common.sh@336 -- $ read -ra ver1 00:47:35.082 21:15:03 -- scripts/common.sh@337 -- $ IFS=.-: 00:47:35.082 21:15:03 -- scripts/common.sh@337 -- $ read -ra ver2 00:47:35.082 21:15:03 -- scripts/common.sh@338 -- $ local 'op=<' 00:47:35.082 21:15:03 -- scripts/common.sh@340 -- $ ver1_l=2 00:47:35.082 21:15:03 -- scripts/common.sh@341 -- $ ver2_l=1 00:47:35.082 21:15:03 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:47:35.082 21:15:03 -- scripts/common.sh@344 -- $ case "$op" in 00:47:35.082 21:15:03 -- scripts/common.sh@345 -- $ : 1 00:47:35.082 21:15:03 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:47:35.082 21:15:03 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:35.082 21:15:03 -- scripts/common.sh@365 -- $ decimal 1 00:47:35.082 21:15:03 -- scripts/common.sh@353 -- $ local d=1 00:47:35.082 21:15:03 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:47:35.082 21:15:03 -- scripts/common.sh@355 -- $ echo 1 00:47:35.082 21:15:03 -- scripts/common.sh@365 -- $ ver1[v]=1 00:47:35.082 21:15:03 -- scripts/common.sh@366 -- $ decimal 2 00:47:35.082 21:15:03 -- scripts/common.sh@353 -- $ local d=2 00:47:35.082 21:15:03 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:47:35.082 21:15:03 -- scripts/common.sh@355 -- $ echo 2 00:47:35.082 21:15:03 -- scripts/common.sh@366 -- $ ver2[v]=2 00:47:35.082 21:15:03 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:47:35.082 21:15:03 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:47:35.082 21:15:03 -- scripts/common.sh@368 -- $ return 0 00:47:35.082 21:15:03 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:35.082 21:15:03 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:47:35.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.082 --rc genhtml_branch_coverage=1 00:47:35.082 --rc genhtml_function_coverage=1 00:47:35.082 --rc genhtml_legend=1 00:47:35.082 --rc geninfo_all_blocks=1 00:47:35.082 --rc geninfo_unexecuted_blocks=1 00:47:35.082 00:47:35.082 ' 00:47:35.082 21:15:03 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:47:35.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.082 --rc genhtml_branch_coverage=1 00:47:35.082 --rc genhtml_function_coverage=1 00:47:35.082 --rc genhtml_legend=1 00:47:35.082 --rc geninfo_all_blocks=1 00:47:35.082 --rc geninfo_unexecuted_blocks=1 00:47:35.082 00:47:35.082 ' 00:47:35.082 21:15:03 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:47:35.082 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.082 --rc genhtml_branch_coverage=1 00:47:35.082 --rc genhtml_function_coverage=1 00:47:35.082 --rc genhtml_legend=1 00:47:35.082 --rc geninfo_all_blocks=1 00:47:35.082 --rc geninfo_unexecuted_blocks=1 00:47:35.082 00:47:35.082 ' 00:47:35.082 21:15:03 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:47:35.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.082 --rc genhtml_branch_coverage=1 00:47:35.082 --rc genhtml_function_coverage=1 00:47:35.082 --rc genhtml_legend=1 00:47:35.082 --rc geninfo_all_blocks=1 00:47:35.082 --rc geninfo_unexecuted_blocks=1 00:47:35.082 00:47:35.082 ' 00:47:35.082 21:15:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:35.082 21:15:03 -- scripts/common.sh@15 -- $ shopt -s extglob 00:47:35.082 21:15:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:35.082 21:15:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:35.082 21:15:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:35.082 21:15:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.082 21:15:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.082 21:15:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.082 21:15:03 -- paths/export.sh@5 -- $ export PATH 00:47:35.082 21:15:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.082 21:15:03 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:47:35.082 21:15:03 -- common/autobuild_common.sh@486 -- $ date +%s 00:47:35.082 21:15:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728414903.XXXXXX 00:47:35.082 21:15:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728414903.i6V7N1 00:47:35.082 21:15:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:47:35.082 21:15:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:47:35.082 21:15:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:47:35.082 21:15:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:47:35.082 21:15:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:47:35.082 21:15:03 -- common/autobuild_common.sh@502 -- $ get_config_params 00:47:35.082 21:15:03 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:47:35.082 21:15:03 -- common/autotest_common.sh@10 -- $ set +x 00:47:35.083 21:15:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:47:35.083 21:15:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:47:35.083 21:15:03 -- pm/common@17 -- $ local monitor 00:47:35.083 21:15:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:35.083 21:15:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:35.083 21:15:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:35.083 21:15:03 -- pm/common@21 -- $ date +%s 00:47:35.083 21:15:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:35.083 21:15:03 -- pm/common@21 -- $ date +%s 00:47:35.083 21:15:03 -- pm/common@25 -- $ sleep 1 00:47:35.083 21:15:03 -- pm/common@21 -- $ date +%s 00:47:35.083 21:15:03 -- pm/common@21 -- $ date +%s 00:47:35.083 21:15:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728414903 00:47:35.083 21:15:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728414903 00:47:35.083 21:15:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728414903 00:47:35.083 21:15:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728414903 00:47:35.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728414903_collect-cpu-load.pm.log 00:47:35.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728414903_collect-vmstat.pm.log 00:47:35.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728414903_collect-cpu-temp.pm.log 00:47:35.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728414903_collect-bmc-pm.bmc.pm.log 00:47:36.019 21:15:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:47:36.019 21:15:04 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:47:36.019 21:15:04 -- spdk/autopackage.sh@14 -- $ timing_finish 
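start_monitor_resources kicks off one collector per resource, all sharing a single date +%s suffix so their logs group under the same run. Condensed from the four invocations traced above; running them in the background with & is an assumption about pm/common, which only shows the launches here:

    ts=$(date +%s)
    pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    power=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    "$pm/collect-cpu-load" -d "$power" -l -p "monitor.autopackage.sh.$ts" &
    "$pm/collect-vmstat"   -d "$power" -l -p "monitor.autopackage.sh.$ts" &
    "$pm/collect-cpu-temp" -d "$power" -l -p "monitor.autopackage.sh.$ts" &
    sudo -E "$pm/collect-bmc-pm" -d "$power" -l -p "monitor.autopackage.sh.$ts" &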
00:47:36.019 21:15:04 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:36.019 21:15:04 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:36.019 21:15:04 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:36.019 21:15:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:36.019 21:15:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:36.019 21:15:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:36.019 21:15:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:36.019 21:15:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:47:36.019 21:15:04 -- pm/common@44 -- $ pid=1967086 00:47:36.019 21:15:04 -- pm/common@50 -- $ kill -TERM 1967086 00:47:36.019 21:15:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:36.019 21:15:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:47:36.019 21:15:04 -- pm/common@44 -- $ pid=1967088 00:47:36.019 21:15:04 -- pm/common@50 -- $ kill -TERM 1967088 00:47:36.019 21:15:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:36.019 21:15:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:47:36.019 21:15:04 -- pm/common@44 -- $ pid=1967090 00:47:36.019 21:15:04 -- pm/common@50 -- $ kill -TERM 1967090 00:47:36.019 21:15:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:36.019 21:15:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:47:36.019 21:15:04 -- pm/common@44 -- $ pid=1967114 00:47:36.019 21:15:04 -- pm/common@50 -- $ sudo -E kill -TERM 1967114 00:47:36.019 + [[ -n 1498336 ]] 00:47:36.020 + sudo kill 1498336 00:47:36.030 [Pipeline] } 00:47:36.045 [Pipeline] // stage 00:47:36.050 [Pipeline] } 00:47:36.064 [Pipeline] // timeout 00:47:36.069 [Pipeline] } 00:47:36.082 [Pipeline] // catchError 00:47:36.088 [Pipeline] } 00:47:36.102 [Pipeline] // wrap 00:47:36.108 [Pipeline] } 00:47:36.121 [Pipeline] // catchError 00:47:36.130 [Pipeline] stage 00:47:36.132 [Pipeline] { (Epilogue) 00:47:36.145 [Pipeline] catchError 00:47:36.147 [Pipeline] { 00:47:36.159 [Pipeline] echo 00:47:36.160 Cleanup processes 00:47:36.166 [Pipeline] sh 00:47:36.457 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:36.457 1967276 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:47:36.457 1967398 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:36.471 [Pipeline] sh 00:47:36.757 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:36.757 ++ grep -v 'sudo pgrep' 00:47:36.757 ++ awk '{print $1}' 00:47:36.757 + sudo kill -9 1967276 00:47:36.769 [Pipeline] sh 00:47:37.057 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:03.618 [Pipeline] sh 00:48:03.905 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:04.164 Artifacts sizes are good 00:48:04.178 [Pipeline] archiveArtifacts 00:48:04.185 Archiving artifacts 00:48:04.338 [Pipeline] sh 00:48:04.640 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 
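The signal_monitor_resources TERM trace near the top of this block is the matching shutdown: each collector recorded its PID under the power directory, and stop sends TERM to whatever is stored there. Reading the PID back with cat is an assumption; only the pid-file paths and the kill -TERM calls are visible in the trace:

    power=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for f in "$power"/collect-{cpu-load,vmstat,cpu-temp,bmc-pm}.pid; do
        [[ -e $f ]] || continue
        kill -TERM "$(cat "$f")"    # the bmc collector is stopped with sudo -E in the real helper
    done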
00:48:04.654 [Pipeline] cleanWs
00:48:04.664 [WS-CLEANUP] Deleting project workspace...
00:48:04.664 [WS-CLEANUP] Deferred wipeout is used...
00:48:04.671 [WS-CLEANUP] done
00:48:04.673 [Pipeline] }
00:48:04.690 [Pipeline] // catchError
00:48:04.701 [Pipeline] sh
00:48:04.996 + logger -p user.info -t JENKINS-CI
00:48:05.005 [Pipeline] }
00:48:05.014 [Pipeline] // stage
00:48:05.018 [Pipeline] }
00:48:05.027 [Pipeline] // node
00:48:05.031 [Pipeline] End of Pipeline
00:48:05.055 Finished: SUCCESS